Test Report: Docker_Linux_crio 22179

505b1c9a8fd96db2c5d776a2dde7c3c6efd2d048:2025-12-21:42914

Failed tests (26/419)

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable volcano --alsologtostderr -v=1: exit status 11 (243.599285ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:00.497928   22478 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:00.498396   22478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:00.498419   22478 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:00.498427   22478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:00.498879   22478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:00.499440   22478 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:00.499803   22478 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:00.499823   22478 addons.go:622] checking whether the cluster is paused
	I1221 19:48:00.499899   22478 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:00.499911   22478 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:00.500297   22478 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:00.518820   22478 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:00.518879   22478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:00.535622   22478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:00.630429   22478 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:00.630509   22478 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:00.658382   22478 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:00.658417   22478 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:00.658421   22478 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:00.658425   22478 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:00.658428   22478 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:00.658432   22478 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:00.658435   22478 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:00.658438   22478 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:00.658443   22478 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:00.658451   22478 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:00.658455   22478 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:00.658459   22478 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:00.658464   22478 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:00.658468   22478 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:00.658473   22478 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:00.658484   22478 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:00.658489   22478 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:00.658494   22478 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:00.658497   22478 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:00.658500   22478 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:00.658503   22478 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:00.658505   22478 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:00.658508   22478 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:00.658511   22478 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:00.658513   22478 cri.go:96] found id: ""
	I1221 19:48:00.658572   22478 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:00.672202   22478 out.go:203] 
	W1221 19:48:00.673633   22478 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:00.673657   22478 out.go:285] * 
	* 
	W1221 19:48:00.676703   22478 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:00.677954   22478 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

TestAddons/parallel/Registry (14.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.735789ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002206291s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002187496s
addons_test.go:394: (dbg) Run:  kubectl --context addons-734405 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-734405 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-734405 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.34024117s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable registry --alsologtostderr -v=1: exit status 11 (253.381871ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:24.040492   24452 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:24.040792   24452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:24.040804   24452 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:24.040810   24452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:24.040994   24452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:24.041277   24452 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:24.041717   24452 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:24.041746   24452 addons.go:622] checking whether the cluster is paused
	I1221 19:48:24.041903   24452 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:24.041926   24452 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:24.042423   24452 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:24.061622   24452 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:24.061681   24452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:24.079271   24452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:24.177312   24452 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:24.177389   24452 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:24.213090   24452 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:24.213113   24452 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:24.213119   24452 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:24.213124   24452 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:24.213129   24452 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:24.213133   24452 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:24.213136   24452 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:24.213138   24452 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:24.213141   24452 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:24.213148   24452 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:24.213153   24452 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:24.213158   24452 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:24.213163   24452 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:24.213168   24452 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:24.213176   24452 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:24.213189   24452 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:24.213194   24452 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:24.213209   24452 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:24.213217   24452 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:24.213243   24452 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:24.213254   24452 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:24.213258   24452 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:24.213263   24452 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:24.213267   24452 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:24.213272   24452 cri.go:96] found id: ""
	I1221 19:48:24.213331   24452 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:24.228529   24452 out.go:203] 
	W1221 19:48:24.229912   24452 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:24.229936   24452 out.go:285] * 
	* 
	W1221 19:48:24.233093   24452 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:24.234418   24452 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.79s)

TestAddons/parallel/RegistryCreds (0.39s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.624746ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-734405
addons_test.go:334: (dbg) Run:  kubectl --context addons-734405 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (233.644743ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:29.732811   25824 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:29.732951   25824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:29.732961   25824 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:29.732965   25824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:29.733150   25824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:29.733405   25824 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:29.733699   25824 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:29.733716   25824 addons.go:622] checking whether the cluster is paused
	I1221 19:48:29.733792   25824 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:29.733803   25824 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:29.734140   25824 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:29.750980   25824 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:29.751019   25824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:29.766854   25824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:29.863715   25824 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:29.863811   25824 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:29.892103   25824 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:29.892130   25824 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:29.892145   25824 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:29.892150   25824 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:29.892156   25824 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:29.892161   25824 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:29.892165   25824 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:29.892170   25824 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:29.892174   25824 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:29.892182   25824 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:29.892191   25824 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:29.892194   25824 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:29.892197   25824 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:29.892199   25824 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:29.892202   25824 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:29.892207   25824 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:29.892210   25824 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:29.892213   25824 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:29.892217   25824 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:29.892235   25824 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:29.892245   25824 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:29.892253   25824 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:29.892257   25824 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:29.892262   25824 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:29.892266   25824 cri.go:96] found id: ""
	I1221 19:48:29.892306   25824 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:29.905692   25824 out.go:203] 
	W1221 19:48:29.906802   25824 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:29.906817   25824 out.go:285] * 
	* 
	W1221 19:48:29.909793   25824 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:29.910907   25824 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.39s)

TestAddons/parallel/Ingress (149.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-734405 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-734405 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-734405 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [666fd15c-8584-41e5-a4a8-67ac1fc9a92d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [666fd15c-8584-41e5-a4a8-67ac1fc9a92d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002950252s
I1221 19:48:35.258372   12711 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.438488983s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-734405 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-734405
helpers_test.go:244: (dbg) docker inspect addons-734405:

-- stdout --
	[
	    {
	        "Id": "f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b",
	        "Created": "2025-12-21T19:46:47.567938506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T19:46:47.602137045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/hosts",
	        "LogPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b-json.log",
	        "Name": "/addons-734405",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-734405:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-734405",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b",
	                "LowerDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-734405",
	                "Source": "/var/lib/docker/volumes/addons-734405/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-734405",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-734405",
	                "name.minikube.sigs.k8s.io": "addons-734405",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3f3b83537715c70ffa6b8f14ff988ae577eac2f8ef7a89766945f782ca7b803e",
	            "SandboxKey": "/var/run/docker/netns/3f3b83537715",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-734405": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8cb8a005cb45712daf0fdc43f6bb5ec21f904d698b1f455f3203b92bae54f643",
	                    "EndpointID": "fa34e51fd9c93c6f6d93729555a31d9217b7c42bdff558507337bb24e9eda25b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:35:ab:b2:29:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-734405",
	                        "f342f561decc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-734405 -n addons-734405
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-734405 logs -n 25: (1.098340878s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-301733 --alsologtostderr --binary-mirror http://127.0.0.1:43353 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-301733 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-301733                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-301733 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ addons  │ disable dashboard -p addons-734405                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ addons  │ enable dashboard -p addons-734405                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ start   │ -p addons-734405 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-734405 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ enable headlamp -p addons-734405 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ ip      │ addons-734405 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-734405 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ ssh     │ addons-734405 ssh cat /opt/local-path-provisioner/pvc-c9a8c150-674a-4b88-96eb-4f04de96494b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-734405 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-734405                                                                                                                                                                                                                                                                                                                                                                                           │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-734405 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ ssh     │ addons-734405 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ ip      │ addons-734405 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-734405        │ jenkins │ v1.37.0 │ 21 Dec 25 19:50 UTC │ 21 Dec 25 19:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:24.993009   14485 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:24.993269   14485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:24.993279   14485 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:24.993286   14485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:24.993478   14485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:46:24.994068   14485 out.go:368] Setting JSON to false
	I1221 19:46:24.994822   14485 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1734,"bootTime":1766344651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:24.994870   14485 start.go:143] virtualization: kvm guest
	I1221 19:46:24.996530   14485 out.go:179] * [addons-734405] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:24.997947   14485 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:46:24.997941   14485 notify.go:221] Checking for updates...
	I1221 19:46:25.000136   14485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:25.001621   14485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:46:25.002814   14485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:46:25.004069   14485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:46:25.005427   14485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:46:25.006643   14485 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:25.028740   14485 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:46:25.028870   14485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:25.082959   14485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-21 19:46:25.074335609 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:25.083053   14485 docker.go:319] overlay module found
	I1221 19:46:25.084689   14485 out.go:179] * Using the docker driver based on user configuration
	I1221 19:46:25.085754   14485 start.go:309] selected driver: docker
	I1221 19:46:25.085766   14485 start.go:928] validating driver "docker" against <nil>
	I1221 19:46:25.085777   14485 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:46:25.086331   14485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:25.142090   14485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-21 19:46:25.132633111 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:25.142277   14485 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:25.142484   14485 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:46:25.144097   14485 out.go:179] * Using Docker driver with root privileges
	I1221 19:46:25.145146   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:46:25.145212   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:46:25.145251   14485 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 19:46:25.145318   14485 start.go:353] cluster config:
	{Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1221 19:46:25.146640   14485 out.go:179] * Starting "addons-734405" primary control-plane node in "addons-734405" cluster
	I1221 19:46:25.147631   14485 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 19:46:25.148718   14485 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 19:46:25.149770   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:25.149793   14485 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 19:46:25.149799   14485 cache.go:65] Caching tarball of preloaded images
	I1221 19:46:25.149800   14485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 19:46:25.149888   14485 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 19:46:25.149900   14485 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 19:46:25.150168   14485 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json ...
	I1221 19:46:25.150188   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json: {Name:mk3e65bc3be6a489d858bc2169da4b8071c2bfb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:25.165083   14485 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 19:46:25.165185   14485 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 19:46:25.165200   14485 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 19:46:25.165204   14485 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 19:46:25.165210   14485 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 19:46:25.165217   14485 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 from local cache
	I1221 19:46:38.946212   14485 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 from cached tarball
	I1221 19:46:38.946269   14485 cache.go:243] Successfully downloaded all kic artifacts
	I1221 19:46:38.946316   14485 start.go:360] acquireMachinesLock for addons-734405: {Name:mk30b118a4bdc15e39537bd7efedc75e73779231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 19:46:38.946421   14485 start.go:364] duration metric: took 84.092µs to acquireMachinesLock for "addons-734405"
	I1221 19:46:38.946444   14485 start.go:93] Provisioning new machine with config: &{Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:46:38.946524   14485 start.go:125] createHost starting for "" (driver="docker")
	I1221 19:46:39.068166   14485 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1221 19:46:39.068428   14485 start.go:159] libmachine.API.Create for "addons-734405" (driver="docker")
	I1221 19:46:39.068463   14485 client.go:173] LocalClient.Create starting
	I1221 19:46:39.068605   14485 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 19:46:39.103936   14485 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 19:46:39.177705   14485 cli_runner.go:164] Run: docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 19:46:39.195267   14485 cli_runner.go:211] docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 19:46:39.195335   14485 network_create.go:284] running [docker network inspect addons-734405] to gather additional debugging logs...
	I1221 19:46:39.195355   14485 cli_runner.go:164] Run: docker network inspect addons-734405
	W1221 19:46:39.210762   14485 cli_runner.go:211] docker network inspect addons-734405 returned with exit code 1
	I1221 19:46:39.210789   14485 network_create.go:287] error running [docker network inspect addons-734405]: docker network inspect addons-734405: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-734405 not found
	I1221 19:46:39.210805   14485 network_create.go:289] output of [docker network inspect addons-734405]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-734405 not found
	
	** /stderr **
	I1221 19:46:39.210982   14485 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 19:46:39.226924   14485 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc0530}
	I1221 19:46:39.226956   14485 network_create.go:124] attempt to create docker network addons-734405 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 19:46:39.227005   14485 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-734405 addons-734405
	I1221 19:46:39.432629   14485 network_create.go:108] docker network addons-734405 192.168.49.0/24 created
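For reference, the bridge network created above can be reproduced or inspected by hand with the same flags the log records (a sketch; it assumes a local Docker daemon and that the addons-734405 network name is free):

    # Recreate the profile network the way minikube does (subnet, gateway and MTU taken from the log above).
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-734405 \
      addons-734405

    # Confirm the subnet and gateway that were applied.
    docker network inspect addons-734405 --format '{{json .IPAM.Config}}'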
	I1221 19:46:39.432657   14485 kic.go:121] calculated static IP "192.168.49.2" for the "addons-734405" container
	I1221 19:46:39.432733   14485 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 19:46:39.448411   14485 cli_runner.go:164] Run: docker volume create addons-734405 --label name.minikube.sigs.k8s.io=addons-734405 --label created_by.minikube.sigs.k8s.io=true
	I1221 19:46:39.512158   14485 oci.go:103] Successfully created a docker volume addons-734405
	I1221 19:46:39.512239   14485 cli_runner.go:164] Run: docker run --rm --name addons-734405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --entrypoint /usr/bin/test -v addons-734405:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 19:46:43.723408   14485 cli_runner.go:217] Completed: docker run --rm --name addons-734405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --entrypoint /usr/bin/test -v addons-734405:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib: (4.211128594s)
	I1221 19:46:43.723437   14485 oci.go:107] Successfully prepared a docker volume addons-734405
	I1221 19:46:43.723515   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:43.723529   14485 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 19:46:43.723601   14485 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-734405:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 19:46:47.500107   14485 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-734405:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.776467341s)
	I1221 19:46:47.500144   14485 kic.go:203] duration metric: took 3.776610597s to extract preloaded images to volume ...
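The preloaded-image volume produced by the extraction above can be spot-checked much as the sidecar container does, by mounting it back into the base image (a sketch; the digest-less tag is used here for brevity, and the /var/lib layout is assumed from the preload tarball):

    # List the extracted runtime state inside the addons-734405 volume.
    docker run --rm --entrypoint /bin/ls \
      -v addons-734405:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260 \
      /var/lib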
	W1221 19:46:47.500297   14485 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 19:46:47.500348   14485 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 19:46:47.500404   14485 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 19:46:47.552778   14485 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-734405 --name addons-734405 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-734405 --network addons-734405 --ip 192.168.49.2 --volume addons-734405:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 19:46:47.830782   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Running}}
	I1221 19:46:47.848551   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:47.865280   14485 cli_runner.go:164] Run: docker exec addons-734405 stat /var/lib/dpkg/alternatives/iptables
	I1221 19:46:47.911527   14485 oci.go:144] the created container "addons-734405" has a running status.
	I1221 19:46:47.911556   14485 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa...
	I1221 19:46:47.992295   14485 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 19:46:48.015900   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:48.032667   14485 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 19:46:48.032691   14485 kic_runner.go:114] Args: [docker exec --privileged addons-734405 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 19:46:48.099488   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:48.124455   14485 machine.go:94] provisionDockerMachine start ...
	I1221 19:46:48.124571   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:48.146385   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:48.146732   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:48.146751   14485 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 19:46:48.147967   14485 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38330->127.0.0.1:32768: read: connection reset by peer
	I1221 19:46:51.280660   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-734405
	
	I1221 19:46:51.280687   14485 ubuntu.go:182] provisioning hostname "addons-734405"
	I1221 19:46:51.280749   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.297672   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.297886   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.297898   14485 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-734405 && echo "addons-734405" | sudo tee /etc/hostname
	I1221 19:46:51.440349   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-734405
	
	I1221 19:46:51.440427   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.457308   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.457542   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.457566   14485 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-734405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-734405/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-734405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 19:46:51.590291   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 19:46:51.590333   14485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 19:46:51.590354   14485 ubuntu.go:190] setting up certificates
	I1221 19:46:51.590369   14485 provision.go:84] configureAuth start
	I1221 19:46:51.590418   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:51.607414   14485 provision.go:143] copyHostCerts
	I1221 19:46:51.607490   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 19:46:51.607594   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 19:46:51.607646   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 19:46:51.607695   14485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.addons-734405 san=[127.0.0.1 192.168.49.2 addons-734405 localhost minikube]
	I1221 19:46:51.665172   14485 provision.go:177] copyRemoteCerts
	I1221 19:46:51.665253   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 19:46:51.665294   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.683101   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:51.778740   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 19:46:51.796257   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1221 19:46:51.812326   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 19:46:51.827762   14485 provision.go:87] duration metric: took 237.37976ms to configureAuth
	I1221 19:46:51.827791   14485 ubuntu.go:206] setting minikube options for container-runtime
	I1221 19:46:51.827950   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:46:51.828043   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.844642   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.844844   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.844859   14485 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 19:46:52.107443   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 19:46:52.107466   14485 machine.go:97] duration metric: took 3.982985531s to provisionDockerMachine
	I1221 19:46:52.107479   14485 client.go:176] duration metric: took 13.039007838s to LocalClient.Create
	I1221 19:46:52.107506   14485 start.go:167] duration metric: took 13.039079196s to libmachine.API.Create "addons-734405"
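The container-runtime option written to /etc/sysconfig/crio.minikube a few lines above can be verified from the host once the machine is provisioned (a sketch using the same binary and profile as this run):

    # Show the insecure-registry option and confirm CRI-O restarted cleanly after the change.
    out/minikube-linux-amd64 -p addons-734405 ssh "sudo cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"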
	I1221 19:46:52.107518   14485 start.go:293] postStartSetup for "addons-734405" (driver="docker")
	I1221 19:46:52.107532   14485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 19:46:52.107592   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 19:46:52.107643   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.124968   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.222393   14485 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 19:46:52.225632   14485 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 19:46:52.225654   14485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 19:46:52.225663   14485 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 19:46:52.225722   14485 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 19:46:52.225750   14485 start.go:296] duration metric: took 118.224774ms for postStartSetup
	I1221 19:46:52.226028   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:52.243062   14485 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json ...
	I1221 19:46:52.243331   14485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 19:46:52.243372   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.260291   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.353370   14485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 19:46:52.357588   14485 start.go:128] duration metric: took 13.411046704s to createHost
	I1221 19:46:52.357615   14485 start.go:83] releasing machines lock for "addons-734405", held for 13.411183076s
	I1221 19:46:52.357673   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:52.375593   14485 ssh_runner.go:195] Run: cat /version.json
	I1221 19:46:52.375637   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.375676   14485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 19:46:52.375735   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.393927   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.394207   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.486293   14485 ssh_runner.go:195] Run: systemctl --version
	I1221 19:46:52.538541   14485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 19:46:52.571399   14485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 19:46:52.575686   14485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 19:46:52.575749   14485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 19:46:52.599848   14485 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 19:46:52.599870   14485 start.go:496] detecting cgroup driver to use...
	I1221 19:46:52.599899   14485 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 19:46:52.599945   14485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 19:46:52.614887   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 19:46:52.626292   14485 docker.go:218] disabling cri-docker service (if available) ...
	I1221 19:46:52.626355   14485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 19:46:52.641610   14485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 19:46:52.657577   14485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 19:46:52.736346   14485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 19:46:52.818380   14485 docker.go:234] disabling docker service ...
	I1221 19:46:52.818433   14485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 19:46:52.835383   14485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 19:46:52.846941   14485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 19:46:52.925835   14485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 19:46:53.003549   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 19:46:53.015001   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 19:46:53.027628   14485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 19:46:53.027687   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.036845   14485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 19:46:53.036904   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.044690   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.052464   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.060376   14485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 19:46:53.067525   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.075408   14485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.087256   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
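Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager, move conmon into the pod cgroup, and allow unprivileged binding of low ports; the result can be checked on the node (a sketch):

    # Show the keys the sed edits above touched in the CRI-O drop-in config.
    out/minikube-linux-amd64 -p addons-734405 ssh \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Expected values, per the commands in the log: pause_image = "registry.k8s.io/pause:3.10.1",
    # cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls entry
    # "net.ipv4.ip_unprivileged_port_start=0".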
	I1221 19:46:53.094903   14485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 19:46:53.101259   14485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1221 19:46:53.101310   14485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1221 19:46:53.111993   14485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 19:46:53.119773   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:53.195750   14485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 19:46:53.321793   14485 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 19:46:53.321866   14485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 19:46:53.325747   14485 start.go:564] Will wait 60s for crictl version
	I1221 19:46:53.325802   14485 ssh_runner.go:195] Run: which crictl
	I1221 19:46:53.329286   14485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 19:46:53.353176   14485 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 19:46:53.353306   14485 ssh_runner.go:195] Run: crio --version
	I1221 19:46:53.379302   14485 ssh_runner.go:195] Run: crio --version
	I1221 19:46:53.407066   14485 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
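The runtime probe above can be repeated by hand against the node; a sketch using the profile from this run:

    # Ask the CRI endpoint for its version, as the start-up code does before waiting on crictl.
    out/minikube-linux-amd64 -p addons-734405 ssh "sudo crictl version"
    # Expected, per the log: RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1.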
	I1221 19:46:53.408164   14485 cli_runner.go:164] Run: docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 19:46:53.424480   14485 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 19:46:53.428544   14485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 19:46:53.438103   14485 kubeadm.go:884] updating cluster {Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 19:46:53.438216   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:53.438288   14485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:53.466408   14485 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 19:46:53.466435   14485 crio.go:433] Images already preloaded, skipping extraction
	I1221 19:46:53.466484   14485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:53.490134   14485 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 19:46:53.490156   14485 cache_images.go:86] Images are preloaded, skipping loading
	I1221 19:46:53.490163   14485 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1221 19:46:53.490301   14485 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-734405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 19:46:53.490400   14485 ssh_runner.go:195] Run: crio config
	I1221 19:46:53.532618   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:46:53.532640   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:46:53.532656   14485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 19:46:53.532682   14485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-734405 NodeName:addons-734405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 19:46:53.532827   14485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-734405"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 19:46:53.532901   14485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 19:46:53.540663   14485 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 19:46:53.540715   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 19:46:53.547896   14485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1221 19:46:53.559635   14485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 19:46:53.573672   14485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
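The rendered kubeadm configuration shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; it can be inspected, and optionally validated, from the host (a sketch; the `config validate` subcommand is assumed to be available in the v1.34.3 kubeadm binary):

    # Print the config exactly as it was staged on the node.
    out/minikube-linux-amd64 -p addons-734405 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
    # Optionally let kubeadm itself check it (assumes the validate subcommand in this kubeadm version).
    out/minikube-linux-amd64 -p addons-734405 ssh \
      "sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"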
	I1221 19:46:53.585142   14485 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 19:46:53.588479   14485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 19:46:53.597744   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:53.675621   14485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:46:53.700473   14485 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405 for IP: 192.168.49.2
	I1221 19:46:53.700498   14485 certs.go:195] generating shared ca certs ...
	I1221 19:46:53.700515   14485 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.700648   14485 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 19:46:53.798118   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt ...
	I1221 19:46:53.798153   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt: {Name:mk670d7a9ae2f463db74b60744ff0c0716b9481f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.798360   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key ...
	I1221 19:46:53.798376   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key: {Name:mk386ce7a21cb5370b96f28cf7c9eea7f93f736f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.798483   14485 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 19:46:53.881850   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt ...
	I1221 19:46:53.881878   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt: {Name:mk2a09c52952c55f436663f02992211eb851389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.882068   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key ...
	I1221 19:46:53.882090   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key: {Name:mka668d20d09552540510629dda9e7183fc65f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.882193   14485 certs.go:257] generating profile certs ...
	I1221 19:46:53.882287   14485 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key
	I1221 19:46:53.882306   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt with IP's: []
	I1221 19:46:53.949191   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt ...
	I1221 19:46:53.949218   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: {Name:mk437ac45795a9ed2517fed6abf64052104e2d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.949415   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key ...
	I1221 19:46:53.949432   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key: {Name:mk9c09a3d061b3b4e9b040df0f0accdecc9a4b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.949538   14485 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92
	I1221 19:46:53.949567   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1221 19:46:54.027546   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 ...
	I1221 19:46:54.027574   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92: {Name:mk65654c4f0d07db51693ce8d6fc85c1eb412fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.027773   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92 ...
	I1221 19:46:54.027789   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92: {Name:mk02396785b2aed0fb1b15f2b0e09b14f6971ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.027892   14485 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt
	I1221 19:46:54.028014   14485 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key
	I1221 19:46:54.028097   14485 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key
	I1221 19:46:54.028122   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt with IP's: []
	I1221 19:46:54.112462   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt ...
	I1221 19:46:54.112492   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt: {Name:mk86548759bdc0f34bd53e7dc810bdaf3f116117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.112677   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key ...
	I1221 19:46:54.112700   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key: {Name:mk480430c5e33c65911aa0287b165fd3685694cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.112938   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 19:46:54.112986   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 19:46:54.113024   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 19:46:54.113052   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 19:46:54.113695   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 19:46:54.131466   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 19:46:54.147643   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 19:46:54.163435   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 19:46:54.179184   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 19:46:54.195249   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 19:46:54.211184   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 19:46:54.227013   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 19:46:54.242711   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 19:46:54.260120   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 19:46:54.271582   14485 ssh_runner.go:195] Run: openssl version
	I1221 19:46:54.277207   14485 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.283919   14485 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 19:46:54.292635   14485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.295875   14485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.295918   14485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.329879   14485 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 19:46:54.337475   14485 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
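The three ssh_runner commands just above amount to the standard OpenSSL subject-hash trust layout for the cluster CA; roughly, by hand (a sketch using the same paths as in the log, where the hash value b5213941 is derived from the certificate's subject):
	# compute the subject hash and expose the CA as /etc/ssl/certs/<hash>.0
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"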
	I1221 19:46:54.344331   14485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 19:46:54.347736   14485 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 19:46:54.347781   14485 kubeadm.go:401] StartCluster: {Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:46:54.347861   14485 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:46:54.347900   14485 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:46:54.372488   14485 cri.go:96] found id: ""
	I1221 19:46:54.372545   14485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 19:46:54.379908   14485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 19:46:54.387709   14485 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 19:46:54.387756   14485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 19:46:54.394754   14485 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 19:46:54.394773   14485 kubeadm.go:158] found existing configuration files:
	
	I1221 19:46:54.394804   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 19:46:54.402155   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 19:46:54.402204   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 19:46:54.408760   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 19:46:54.415268   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 19:46:54.415312   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 19:46:54.421894   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 19:46:54.428420   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 19:46:54.428460   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 19:46:54.434952   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 19:46:54.441795   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 19:46:54.441844   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 19:46:54.448618   14485 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 19:46:54.484709   14485 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 19:46:54.484816   14485 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 19:46:54.503707   14485 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 19:46:54.503798   14485 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 19:46:54.503850   14485 kubeadm.go:319] OS: Linux
	I1221 19:46:54.503921   14485 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 19:46:54.503966   14485 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 19:46:54.504006   14485 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 19:46:54.504045   14485 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 19:46:54.504085   14485 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 19:46:54.504150   14485 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 19:46:54.504248   14485 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 19:46:54.504325   14485 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 19:46:54.558098   14485 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 19:46:54.558260   14485 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 19:46:54.558374   14485 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 19:46:54.565166   14485 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 19:46:54.566964   14485 out.go:252]   - Generating certificates and keys ...
	I1221 19:46:54.567036   14485 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 19:46:54.567112   14485 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 19:46:54.644803   14485 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 19:46:55.164760   14485 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 19:46:55.577189   14485 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 19:46:55.876457   14485 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 19:46:56.144051   14485 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 19:46:56.144167   14485 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-734405 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 19:46:56.375892   14485 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 19:46:56.376031   14485 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-734405 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 19:46:56.541731   14485 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 19:46:56.827606   14485 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 19:46:56.934599   14485 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 19:46:56.934669   14485 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 19:46:57.225363   14485 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 19:46:57.448883   14485 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 19:46:57.548595   14485 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 19:46:57.692930   14485 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 19:46:58.130796   14485 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 19:46:58.131248   14485 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 19:46:58.134555   14485 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 19:46:58.135745   14485 out.go:252]   - Booting up control plane ...
	I1221 19:46:58.135826   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 19:46:58.135897   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 19:46:58.136611   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 19:46:58.163581   14485 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 19:46:58.163696   14485 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 19:46:58.169678   14485 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 19:46:58.169887   14485 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 19:46:58.169948   14485 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 19:46:58.266281   14485 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 19:46:58.266445   14485 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 19:46:59.267856   14485 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001710647s
	I1221 19:46:59.270760   14485 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 19:46:59.270876   14485 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1221 19:46:59.270983   14485 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 19:46:59.271107   14485 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 19:47:00.849691   14485 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.575766665s
	I1221 19:47:00.868186   14485 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.597348843s
	I1221 19:47:02.772350   14485 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501521761s
	I1221 19:47:02.787497   14485 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 19:47:02.796654   14485 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 19:47:02.804306   14485 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 19:47:02.804479   14485 kubeadm.go:319] [mark-control-plane] Marking the node addons-734405 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 19:47:02.811401   14485 kubeadm.go:319] [bootstrap-token] Using token: ah16bj.w0eka582y48hwab4
	I1221 19:47:02.812668   14485 out.go:252]   - Configuring RBAC rules ...
	I1221 19:47:02.812816   14485 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 19:47:02.815239   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 19:47:02.819342   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 19:47:02.821359   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 19:47:02.824192   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 19:47:02.826150   14485 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 19:47:03.177340   14485 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 19:47:03.591300   14485 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 19:47:04.176981   14485 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 19:47:04.177691   14485 kubeadm.go:319] 
	I1221 19:47:04.177785   14485 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 19:47:04.177804   14485 kubeadm.go:319] 
	I1221 19:47:04.177894   14485 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 19:47:04.177912   14485 kubeadm.go:319] 
	I1221 19:47:04.177950   14485 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 19:47:04.178035   14485 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 19:47:04.178078   14485 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 19:47:04.178103   14485 kubeadm.go:319] 
	I1221 19:47:04.178191   14485 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 19:47:04.178200   14485 kubeadm.go:319] 
	I1221 19:47:04.178295   14485 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 19:47:04.178304   14485 kubeadm.go:319] 
	I1221 19:47:04.178379   14485 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 19:47:04.178492   14485 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 19:47:04.178586   14485 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 19:47:04.178600   14485 kubeadm.go:319] 
	I1221 19:47:04.178702   14485 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 19:47:04.178801   14485 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 19:47:04.178813   14485 kubeadm.go:319] 
	I1221 19:47:04.178915   14485 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ah16bj.w0eka582y48hwab4 \
	I1221 19:47:04.179057   14485 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 19:47:04.179090   14485 kubeadm.go:319] 	--control-plane 
	I1221 19:47:04.179098   14485 kubeadm.go:319] 
	I1221 19:47:04.179244   14485 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 19:47:04.179263   14485 kubeadm.go:319] 
	I1221 19:47:04.179340   14485 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ah16bj.w0eka582y48hwab4 \
	I1221 19:47:04.179481   14485 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 19:47:04.180859   14485 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 19:47:04.180993   14485 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
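To check the --discovery-token-ca-cert-hash printed in the join command against the cluster CA by hand, the usual kubeadm recipe is (a sketch; it assumes an RSA CA and uses the certificateDir /var/lib/minikube/certs reported earlier in this log):
	# expected to match the sha256 value shown in the kubeadm join command above
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 | sed 's/^.* //'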
	I1221 19:47:04.181014   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:47:04.181025   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:47:04.183169   14485 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1221 19:47:04.184248   14485 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 19:47:04.188324   14485 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1221 19:47:04.188341   14485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1221 19:47:04.200569   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 19:47:04.391219   14485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 19:47:04.391316   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.391386   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-734405 minikube.k8s.io/updated_at=2025_12_21T19_47_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-734405 minikube.k8s.io/primary=true
	I1221 19:47:04.400300   14485 ops.go:34] apiserver oom_adj: -16
	I1221 19:47:04.455865   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.956122   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.456346   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.956565   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.456521   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.956013   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.456314   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.956529   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:08.456679   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:08.956418   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:09.025099   14485 kubeadm.go:1114] duration metric: took 4.633835152s to wait for elevateKubeSystemPrivileges
	I1221 19:47:09.025137   14485 kubeadm.go:403] duration metric: took 14.677358336s to StartCluster
	I1221 19:47:09.025159   14485 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:09.025316   14485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:47:09.025691   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:09.025878   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 19:47:09.025913   14485 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:47:09.025980   14485 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1221 19:47:09.026100   14485 addons.go:70] Setting yakd=true in profile "addons-734405"
	I1221 19:47:09.026125   14485 addons.go:239] Setting addon yakd=true in "addons-734405"
	I1221 19:47:09.026141   14485 addons.go:70] Setting inspektor-gadget=true in profile "addons-734405"
	I1221 19:47:09.026160   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026166   14485 addons.go:239] Setting addon inspektor-gadget=true in "addons-734405"
	I1221 19:47:09.026185   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026199   14485 addons.go:70] Setting metrics-server=true in profile "addons-734405"
	I1221 19:47:09.026218   14485 addons.go:239] Setting addon metrics-server=true in "addons-734405"
	I1221 19:47:09.026216   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:09.026283   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026288   14485 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-734405"
	I1221 19:47:09.026305   14485 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-734405"
	I1221 19:47:09.026317   14485 addons.go:70] Setting default-storageclass=true in profile "addons-734405"
	I1221 19:47:09.026364   14485 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-734405"
	I1221 19:47:09.026625   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026708   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026720   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026735   14485 addons.go:70] Setting volcano=true in profile "addons-734405"
	I1221 19:47:09.026749   14485 addons.go:239] Setting addon volcano=true in "addons-734405"
	I1221 19:47:09.026770   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026790   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027054   14485 addons.go:70] Setting ingress=true in profile "addons-734405"
	I1221 19:47:09.027085   14485 addons.go:239] Setting addon ingress=true in "addons-734405"
	I1221 19:47:09.027092   14485 addons.go:70] Setting gcp-auth=true in profile "addons-734405"
	I1221 19:47:09.027115   14485 mustload.go:66] Loading cluster: addons-734405
	I1221 19:47:09.027120   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.027174   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027317   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:09.027378   14485 addons.go:70] Setting storage-provisioner=true in profile "addons-734405"
	I1221 19:47:09.027395   14485 addons.go:239] Setting addon storage-provisioner=true in "addons-734405"
	I1221 19:47:09.027419   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.027545   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027580   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027710   14485 addons.go:70] Setting cloud-spanner=true in profile "addons-734405"
	I1221 19:47:09.027757   14485 addons.go:239] Setting addon cloud-spanner=true in "addons-734405"
	I1221 19:47:09.027795   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026720   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027887   14485 addons.go:70] Setting ingress-dns=true in profile "addons-734405"
	I1221 19:47:09.028388   14485 addons.go:239] Setting addon ingress-dns=true in "addons-734405"
	I1221 19:47:09.028434   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028922   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026177   14485 addons.go:70] Setting registry-creds=true in profile "addons-734405"
	I1221 19:47:09.029786   14485 addons.go:239] Setting addon registry-creds=true in "addons-734405"
	I1221 19:47:09.029815   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.030291   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027958   14485 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-734405"
	I1221 19:47:09.030626   14485 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-734405"
	I1221 19:47:09.030654   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.033257   14485 out.go:179] * Verifying Kubernetes components...
	I1221 19:47:09.034515   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.034713   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:47:09.027972   14485 addons.go:70] Setting volumesnapshots=true in profile "addons-734405"
	I1221 19:47:09.034983   14485 addons.go:239] Setting addon volumesnapshots=true in "addons-734405"
	I1221 19:47:09.035014   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028071   14485 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-734405"
	I1221 19:47:09.035349   14485 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-734405"
	I1221 19:47:09.035382   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028108   14485 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-734405"
	I1221 19:47:09.035426   14485 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-734405"
	I1221 19:47:09.035454   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.035515   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.028129   14485 addons.go:70] Setting registry=true in profile "addons-734405"
	I1221 19:47:09.036260   14485 addons.go:239] Setting addon registry=true in "addons-734405"
	I1221 19:47:09.036290   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.036747   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039561   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039596   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039811   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039844   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.081833   14485 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1221 19:47:09.083247   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1221 19:47:09.083285   14485 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1221 19:47:09.083365   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.100008   14485 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-734405"
	I1221 19:47:09.101560   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.101486   14485 addons.go:239] Setting addon default-storageclass=true in "addons-734405"
	I1221 19:47:09.101893   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.102265   14485 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1221 19:47:09.105361   14485 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1221 19:47:09.106067   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.106165   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.106607   14485 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:09.106622   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1221 19:47:09.106666   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.109156   14485 out.go:179]   - Using image docker.io/registry:3.0.0
	I1221 19:47:09.109497   14485 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1221 19:47:09.113091   14485 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:09.113110   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1221 19:47:09.113162   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.113411   14485 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1221 19:47:09.113421   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1221 19:47:09.113463   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	W1221 19:47:09.127914   14485 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1221 19:47:09.130075   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1221 19:47:09.131512   14485 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1221 19:47:09.131660   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1221 19:47:09.131681   14485 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1221 19:47:09.131681   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.131751   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.133115   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1221 19:47:09.133134   14485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1221 19:47:09.133203   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.140927   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:09.143465   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1221 19:47:09.144575   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:09.145923   14485 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:09.145944   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1221 19:47:09.146008   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.151357   14485 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1221 19:47:09.151510   14485 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1221 19:47:09.151552   14485 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1221 19:47:09.152506   14485 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:09.152525   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1221 19:47:09.152582   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.151605   14485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 19:47:09.153321   14485 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:09.153344   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1221 19:47:09.153395   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.154362   14485 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:09.154382   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 19:47:09.154428   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.154363   14485 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:09.154460   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1221 19:47:09.154535   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.180412   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.181523   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.184899   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1221 19:47:09.185498   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.187054   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1221 19:47:09.187076   14485 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1221 19:47:09.188399   14485 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:09.188428   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1221 19:47:09.188489   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.190886   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1221 19:47:09.198405   14485 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:09.198426   14485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 19:47:09.198525   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.202381   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.205104   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1221 19:47:09.205703   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 19:47:09.206806   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.207467   14485 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1221 19:47:09.207520   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1221 19:47:09.208715   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1221 19:47:09.208733   14485 out.go:179]   - Using image docker.io/busybox:stable
	I1221 19:47:09.209926   14485 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:09.212280   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1221 19:47:09.212364   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.212244   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1221 19:47:09.214046   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1221 19:47:09.215097   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.216902   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1221 19:47:09.216946   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1221 19:47:09.217020   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.218373   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.219351   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.222441   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.223581   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.235387   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.235660   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.257708   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.258412   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	W1221 19:47:09.260515   14485 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1221 19:47:09.260568   14485 retry.go:84] will retry after 200ms: ssh: handshake failed: EOF
	I1221 19:47:09.263847   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.282479   14485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:47:09.366852   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:09.367935   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:09.369449   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1221 19:47:09.369469   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1221 19:47:09.374358   14485 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1221 19:47:09.374376   14485 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1221 19:47:09.374543   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1221 19:47:09.374569   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1221 19:47:09.392267   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1221 19:47:09.392294   14485 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1221 19:47:09.395646   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:09.398314   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1221 19:47:09.398333   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1221 19:47:09.401486   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:09.403090   14485 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.403113   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1221 19:47:09.417422   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:09.419890   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:09.419900   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:09.423934   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:09.424204   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1221 19:47:09.424218   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1221 19:47:09.427439   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1221 19:47:09.427456   14485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1221 19:47:09.428273   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1221 19:47:09.428287   14485 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1221 19:47:09.429043   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:09.454500   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1221 19:47:09.454587   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1221 19:47:09.461202   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.472327   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.472407   14485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1221 19:47:09.479159   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1221 19:47:09.479182   14485 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1221 19:47:09.488790   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1221 19:47:09.488813   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1221 19:47:09.507239   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1221 19:47:09.507265   14485 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1221 19:47:09.527055   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.541991   14485 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:09.542086   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1221 19:47:09.541991   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:09.542202   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1221 19:47:09.545522   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1221 19:47:09.545597   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1221 19:47:09.576525   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:09.580323   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:09.588699   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1221 19:47:09.588730   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1221 19:47:09.645444   14485 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1221 19:47:09.648969   14485 node_ready.go:35] waiting up to 6m0s for node "addons-734405" to be "Ready" ...
	I1221 19:47:09.656402   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1221 19:47:09.656427   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1221 19:47:09.662957   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:09.762624   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1221 19:47:09.762708   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1221 19:47:09.790761   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1221 19:47:09.790792   14485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1221 19:47:09.884200   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1221 19:47:09.884335   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1221 19:47:09.923310   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1221 19:47:09.923332   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1221 19:47:09.985482   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:09.985511   14485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1221 19:47:10.048613   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:10.163092   14485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-734405" context rescaled to 1 replicas
	I1221 19:47:10.499832   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.132918101s)
	I1221 19:47:10.499942   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.131984382s)
	I1221 19:47:10.500011   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.104341051s)
	I1221 19:47:10.500053   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.09854521s)
	I1221 19:47:10.500097   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.082653762s)
	I1221 19:47:10.500174   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08025754s)
	I1221 19:47:10.757009   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.33708412s)
	I1221 19:47:10.757049   14485 addons.go:495] Verifying addon ingress=true in "addons-734405"
	I1221 19:47:10.757087   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.333122076s)
	I1221 19:47:10.757154   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.328066813s)
	I1221 19:47:10.757285   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.296049769s)
	I1221 19:47:10.757315   14485 addons.go:495] Verifying addon registry=true in "addons-734405"
	I1221 19:47:10.757350   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.230263728s)
	I1221 19:47:10.757374   14485 addons.go:495] Verifying addon metrics-server=true in "addons-734405"
	I1221 19:47:10.768035   14485 out.go:179] * Verifying ingress addon...
	I1221 19:47:10.768304   14485 out.go:179] * Verifying registry addon...
	I1221 19:47:10.771095   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1221 19:47:10.771111   14485 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1221 19:47:10.774128   14485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 19:47:10.774146   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:10.774608   14485 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1221 19:47:10.774627   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:11.183158   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.606547368s)
	I1221 19:47:11.183235   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.602865426s)
	W1221 19:47:11.183259   14485 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
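
	[note] The "apply failed, will retry" above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass whose CustomResourceDefinition is created in the same apply batch, so the first attempt can run before the CRD is established; the forced re-apply a few lines below succeeds once it is. A minimal sketch of the install-CRDs-first pattern is shown here, assuming kubectl is on PATH and pointed at the cluster (hypothetical helper for illustration, not minikube's own retry code):

	// crdfirst.go - sketch: apply the CRD, wait for it to be Established, then apply objects of that kind.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v\n%s", args, out)
		return err
	}

	func main() {
		// 1. create the snapshot CRD on its own
		if err := run("apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
			panic(err)
		}
		// 2. block until the CRD reports the Established condition, so its kinds resolve
		if err := run("wait", "--for=condition=Established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
			panic(err)
		}
		// 3. only now apply custom resources of that kind
		if err := run("apply", "-f", "csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}
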
	I1221 19:47:11.183273   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.520294375s)
	I1221 19:47:11.183536   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.134882072s)
	I1221 19:47:11.183569   14485 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-734405"
	I1221 19:47:11.185339   14485 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-734405 service yakd-dashboard -n yakd-dashboard
	
	I1221 19:47:11.185346   14485 out.go:179] * Verifying csi-hostpath-driver addon...
	I1221 19:47:11.188012   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1221 19:47:11.190652   14485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 19:47:11.190671   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:11.291378   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:11.291539   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:11.430939   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1221 19:47:11.656708   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:11.691053   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:11.791866   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:11.791982   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:12.190817   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:12.292019   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:12.292213   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:12.690863   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:12.791980   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:12.792137   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.190532   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:13.273658   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:13.273825   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.691147   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:13.791798   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:13.791956   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.893214   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.462219685s)
	W1221 19:47:14.150895   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:14.191453   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:14.274010   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:14.274313   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:14.691010   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:14.791334   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:14.791529   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:15.191323   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:15.273508   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:15.273593   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:15.690760   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:15.791346   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:15.791519   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1221 19:47:16.151796   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:16.191068   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:16.274236   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.274357   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.690888   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:16.745755   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1221 19:47:16.745829   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:16.763211   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:16.774357   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.774382   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.864436   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1221 19:47:16.876236   14485 addons.go:239] Setting addon gcp-auth=true in "addons-734405"
	I1221 19:47:16.876279   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:16.876615   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:16.893629   14485 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1221 19:47:16.893708   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:16.911071   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:17.004249   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:17.005338   14485 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1221 19:47:17.006332   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1221 19:47:17.006344   14485 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1221 19:47:17.018485   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1221 19:47:17.018505   14485 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1221 19:47:17.030049   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:17.030069   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1221 19:47:17.041455   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:17.191274   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:17.275208   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.275450   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.322944   14485 addons.go:495] Verifying addon gcp-auth=true in "addons-734405"
	I1221 19:47:17.324242   14485 out.go:179] * Verifying gcp-auth addon...
	I1221 19:47:17.325926   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1221 19:47:17.375521   14485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1221 19:47:17.375543   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:17.691139   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:17.774529   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.774591   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.829199   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:18.190775   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:18.274051   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.274191   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.329047   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1221 19:47:18.651524   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:18.690937   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:18.774315   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.774416   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.828394   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:19.198580   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.273662   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.273787   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.328981   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:19.690637   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.774017   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.774154   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.827959   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:20.190791   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.274289   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.274452   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.328683   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1221 19:47:20.652213   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:20.690544   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.773727   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.773890   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.828761   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.190169   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.273358   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.273540   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.328802   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.690607   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.773739   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.773895   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.832814   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.152935   14485 node_ready.go:49] node "addons-734405" is "Ready"
	I1221 19:47:22.152974   14485 node_ready.go:38] duration metric: took 12.503966942s for node "addons-734405" to be "Ready" ...
	I1221 19:47:22.152992   14485 api_server.go:52] waiting for apiserver process to appear ...
	I1221 19:47:22.153048   14485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 19:47:22.172049   14485 api_server.go:72] duration metric: took 13.146096968s to wait for apiserver process to appear ...
	I1221 19:47:22.172080   14485 api_server.go:88] waiting for apiserver healthz status ...
	I1221 19:47:22.172103   14485 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 19:47:22.178258   14485 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1221 19:47:22.179266   14485 api_server.go:141] control plane version: v1.34.3
	I1221 19:47:22.179296   14485 api_server.go:131] duration metric: took 7.210074ms to wait for apiserver health ...
	I1221 19:47:22.179306   14485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 19:47:22.185672   14485 system_pods.go:59] 20 kube-system pods found
	I1221 19:47:22.185712   14485 system_pods.go:61] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.185724   14485 system_pods.go:61] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.185735   14485 system_pods.go:61] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.185744   14485 system_pods.go:61] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.185754   14485 system_pods.go:61] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.185762   14485 system_pods.go:61] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.185768   14485 system_pods.go:61] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.185774   14485 system_pods.go:61] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.185780   14485 system_pods.go:61] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.185800   14485 system_pods.go:61] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.185814   14485 system_pods.go:61] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.185819   14485 system_pods.go:61] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.185826   14485 system_pods.go:61] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.185841   14485 system_pods.go:61] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.185848   14485 system_pods.go:61] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.185905   14485 system_pods.go:61] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.185917   14485 system_pods.go:61] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.185928   14485 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.185936   14485 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.185943   14485 system_pods.go:61] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.185951   14485 system_pods.go:74] duration metric: took 6.638287ms to wait for pod list to return data ...
	I1221 19:47:22.185960   14485 default_sa.go:34] waiting for default service account to be created ...
	I1221 19:47:22.188683   14485 default_sa.go:45] found service account: "default"
	I1221 19:47:22.188707   14485 default_sa.go:55] duration metric: took 2.740424ms for default service account to be created ...
	I1221 19:47:22.188717   14485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 19:47:22.283707   14485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 19:47:22.283734   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.283869   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.283896   14485 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 19:47:22.283910   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.285986   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.286018   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.286028   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.286057   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.286062   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.286069   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.286077   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.286083   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.286087   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.286095   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.286104   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.286128   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.286135   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.286140   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.286146   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.286160   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.286165   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.286172   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.286177   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.286185   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.286201   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.286235   14485 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1221 19:47:22.383618   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.486472   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.486565   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.486579   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.486591   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.486599   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.486607   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.486628   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.486637   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.486643   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.486648   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.486655   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.486660   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.486665   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.486680   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.486688   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.486695   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.486711   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.486718   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.486726   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.486733   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.486741   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.692687   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.765459   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.765506   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.765515   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Running
	I1221 19:47:22.765526   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.765539   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.765553   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.765562   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.765568   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.765576   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.765582   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.765594   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.765604   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.765610   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.765622   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.765630   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.765642   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.765656   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.765668   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.765677   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.765685   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.765691   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Running
	I1221 19:47:22.765701   14485 system_pods.go:126] duration metric: took 576.976739ms to wait for k8s-apps to be running ...
	I1221 19:47:22.765710   14485 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 19:47:22.765764   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 19:47:22.774949   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.775153   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.783001   14485 system_svc.go:56] duration metric: took 17.284969ms WaitForService to wait for kubelet
	I1221 19:47:22.783026   14485 kubeadm.go:587] duration metric: took 13.75707992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:47:22.783049   14485 node_conditions.go:102] verifying NodePressure condition ...
	I1221 19:47:22.785596   14485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 19:47:22.785623   14485 node_conditions.go:123] node cpu capacity is 8
	I1221 19:47:22.785644   14485 node_conditions.go:105] duration metric: took 2.589014ms to run NodePressure ...
	I1221 19:47:22.785657   14485 start.go:242] waiting for startup goroutines ...
	I1221 19:47:22.829399   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.192018   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.274621   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.274772   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.329512   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.692074   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.774435   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.774536   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.828363   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.191629   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.291960   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.292072   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.329812   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.690784   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.774688   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.774710   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.828965   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.191291   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.292415   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.292450   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.328448   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.692880   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.774670   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.774682   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.829123   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.191155   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.275375   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.275489   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.329025   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.691098   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.775018   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.775058   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.829667   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.192073   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.274637   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.274848   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.329610   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.691625   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.774104   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.774116   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.828933   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.191287   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.275279   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.275332   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.328723   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.691899   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.774591   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.774834   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.829313   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.191452   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.291413   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.291550   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.391645   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.695824   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.774576   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.774603   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.828575   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.192219   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.276331   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.276530   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.330172   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.693305   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.776600   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.777083   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.829545   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.191982   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.274946   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:31.275344   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.329318   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.739158   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.876011   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.876433   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.876974   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.191301   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.291895   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.291930   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.392955   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:32.691585   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.774394   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.774426   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.829253   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.191301   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.273984   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.274130   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.329277   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.691740   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.774642   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.774663   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.829105   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.191321   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.291847   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.291884   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.330631   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.692201   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.775308   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.775475   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.829093   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.191670   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.291992   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.292122   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.329367   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.691707   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.774142   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.774196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.828051   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.191286   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.274821   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.274963   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.329591   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.693964   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.775880   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.776381   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.829537   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.191980   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.274738   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.274838   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.329913   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.692674   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.793062   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.793126   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.829764   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:38.192522   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:38.274202   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:38.274282   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:38.329509   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:38.692078   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:38.792751   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:38.793018   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:38.828939   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.191439   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.275306   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.277167   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.328964   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.691148   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.775085   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.775176   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.828981   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.191520   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.274332   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.274611   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.329287   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.691658   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.774007   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.774216   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.829215   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.191251   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.274443   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.274475   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.329107   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.691443   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.791437   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.791533   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.828458   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.193080   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.274251   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.274394   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.328599   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.691909   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.774761   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.775196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.828994   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.191330   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.273801   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.273887   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:43.329988   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.691311   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.774089   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:43.774126   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.829686   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.192218   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.274538   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:44.274672   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.329278   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.691352   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.774214   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:44.774278   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.829771   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.191917   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.333160   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:45.333167   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.485984   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.756190   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.774393   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.774410   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:45.828995   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.191045   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.274412   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:46.274585   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.329175   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.692763   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.774348   14485 kapi.go:107] duration metric: took 36.003250091s to wait for kubernetes.io/minikube-addons=registry ...
	I1221 19:47:46.774982   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.829433   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.191921   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.274762   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.329211   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.691736   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.774304   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.828904   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.190773   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.274110   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.373191   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.690939   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.775333   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.830073   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.191261   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.273789   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.329651   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.691709   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.773972   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.829552   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.192270   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.292367   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.328793   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.693046   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.793698   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.828969   14485 kapi.go:107] duration metric: took 33.503038969s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1221 19:47:50.830434   14485 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-734405 cluster.
	I1221 19:47:50.831497   14485 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1221 19:47:50.832585   14485 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1221 19:47:51.192059   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.274628   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:51.692528   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.774306   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:52.191516   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:52.273977   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:52.691724   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:52.791858   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.191772   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:53.274371   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.690848   14485 kapi.go:107] duration metric: took 42.502832919s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1221 19:47:53.774178   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.275460   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.775892   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.274655   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.801196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.274663   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.775134   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.274879   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.773904   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.274857   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.774000   14485 kapi.go:107] duration metric: took 48.002886953s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1221 19:47:58.775398   14485 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1221 19:47:58.776573   14485 addons.go:530] duration metric: took 49.750592938s for enable addons: enabled=[inspektor-gadget registry-creds ingress-dns amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1221 19:47:58.776619   14485 start.go:247] waiting for cluster config update ...
	I1221 19:47:58.776643   14485 start.go:256] writing updated cluster config ...
	I1221 19:47:58.776951   14485 ssh_runner.go:195] Run: rm -f paused
	I1221 19:47:58.780731   14485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:47:58.783327   14485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wq5c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.786850   14485 pod_ready.go:94] pod "coredns-66bc5c9577-wq5c4" is "Ready"
	I1221 19:47:58.786871   14485 pod_ready.go:86] duration metric: took 3.525494ms for pod "coredns-66bc5c9577-wq5c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.788511   14485 pod_ready.go:83] waiting for pod "etcd-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.791747   14485 pod_ready.go:94] pod "etcd-addons-734405" is "Ready"
	I1221 19:47:58.791764   14485 pod_ready.go:86] duration metric: took 3.233235ms for pod "etcd-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.793339   14485 pod_ready.go:83] waiting for pod "kube-apiserver-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.796295   14485 pod_ready.go:94] pod "kube-apiserver-addons-734405" is "Ready"
	I1221 19:47:58.796315   14485 pod_ready.go:86] duration metric: took 2.956894ms for pod "kube-apiserver-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.797789   14485 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.185268   14485 pod_ready.go:94] pod "kube-controller-manager-addons-734405" is "Ready"
	I1221 19:47:59.185302   14485 pod_ready.go:86] duration metric: took 387.49367ms for pod "kube-controller-manager-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.383895   14485 pod_ready.go:83] waiting for pod "kube-proxy-w42q9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.784456   14485 pod_ready.go:94] pod "kube-proxy-w42q9" is "Ready"
	I1221 19:47:59.784482   14485 pod_ready.go:86] duration metric: took 400.557638ms for pod "kube-proxy-w42q9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.984252   14485 pod_ready.go:83] waiting for pod "kube-scheduler-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:00.384141   14485 pod_ready.go:94] pod "kube-scheduler-addons-734405" is "Ready"
	I1221 19:48:00.384174   14485 pod_ready.go:86] duration metric: took 399.891025ms for pod "kube-scheduler-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:00.384189   14485 pod_ready.go:40] duration metric: took 1.603427829s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:48:00.427073   14485 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 19:48:00.428707   14485 out.go:179] * Done! kubectl is now configured to use "addons-734405" cluster and "default" namespace by default
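
The gcp-auth messages near the end of this log describe the behaviour just enabled: credentials get mounted into every new pod unless the pod carries the gcp-auth-skip-secret label, and existing pods have to be recreated (or the addon rerun with --refresh) to pick credentials up. As a minimal sketch of the opt-out, reusing the busybox image already present in this cluster (the pod name and label value are illustrative assumptions; the label key comes from the log above):

    # Create a pod the gcp-auth webhook should skip (sketch, not part of the test run)
    kubectl run no-gcp-creds --image=gcr.io/k8s-minikube/busybox \
      --labels=gcp-auth-skip-secret=true -- sleep 3600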
	
	
	==> CRI-O <==
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.120410801Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zwl54/POD" id=8243a7d5-adf0-4e08-821b-533be0193fb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.120488168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.12657879Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zwl54 Namespace:default ID:18a019010dca567573bbbd5c57fefbe32e46240c1e4a485d9f28535fe4769900 UID:71c17498-5a25-493d-a69d-bef41e224512 NetNS:/var/run/netns/2f66c124-a961-49f5-8383-9aeaafae9f88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540ab0}] Aliases:map[]}"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.126607213Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zwl54 to CNI network \"kindnet\" (type=ptp)"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.136855762Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zwl54 Namespace:default ID:18a019010dca567573bbbd5c57fefbe32e46240c1e4a485d9f28535fe4769900 UID:71c17498-5a25-493d-a69d-bef41e224512 NetNS:/var/run/netns/2f66c124-a961-49f5-8383-9aeaafae9f88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540ab0}] Aliases:map[]}"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.137011428Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zwl54 for CNI network kindnet (type=ptp)"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.137882506Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.138769177Z" level=info msg="Ran pod sandbox 18a019010dca567573bbbd5c57fefbe32e46240c1e4a485d9f28535fe4769900 with infra container: default/hello-world-app-5d498dc89-zwl54/POD" id=8243a7d5-adf0-4e08-821b-533be0193fb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.139924229Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=095e7fe7-508b-4daa-8c82-6ef83d268bc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.140081358Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=095e7fe7-508b-4daa-8c82-6ef83d268bc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.140124205Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=095e7fe7-508b-4daa-8c82-6ef83d268bc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.140744776Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=508b2885-aab2-433b-813a-f1351809d801 name=/runtime.v1.ImageService/PullImage
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.156649475Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.995518557Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=508b2885-aab2-433b-813a-f1351809d801 name=/runtime.v1.ImageService/PullImage
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.995993385Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0df6441e-a4b2-4568-b67d-c6f774c98d4c name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:50:52 addons-734405 crio[774]: time="2025-12-21T19:50:52.997266873Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b805d771-d5aa-4992-a4b0-91dee4917f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.000395064Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-zwl54/hello-world-app" id=1d490c3b-dc51-4c11-a74a-5d372289faff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.000500171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.006722649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.006875275Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/973d75200d1dbdbb845760d10bd250821e3c42c3e1ab2f8d0978cd3cdbbda281/merged/etc/passwd: no such file or directory"
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.006897007Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/973d75200d1dbdbb845760d10bd250821e3c42c3e1ab2f8d0978cd3cdbbda281/merged/etc/group: no such file or directory"
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.007099423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.039975963Z" level=info msg="Created container a50ad2f6779615ae076026a86b7c464e6f07dc0d3f15812eb60cd10773f496cf: default/hello-world-app-5d498dc89-zwl54/hello-world-app" id=1d490c3b-dc51-4c11-a74a-5d372289faff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.040610136Z" level=info msg="Starting container: a50ad2f6779615ae076026a86b7c464e6f07dc0d3f15812eb60cd10773f496cf" id=5d64d1d8-0f3a-494b-922a-6f13e07c9f32 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 19:50:53 addons-734405 crio[774]: time="2025-12-21T19:50:53.042371748Z" level=info msg="Started container" PID=9863 containerID=a50ad2f6779615ae076026a86b7c464e6f07dc0d3f15812eb60cd10773f496cf description=default/hello-world-app-5d498dc89-zwl54/hello-world-app id=5d64d1d8-0f3a-494b-922a-6f13e07c9f32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18a019010dca567573bbbd5c57fefbe32e46240c1e4a485d9f28535fe4769900
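
The CRI-O entries above trace the whole lifecycle of the new hello-world-app container: sandbox creation, CNI attachment to the kindnet network, the kicbase/echo-server:1.0 pull, container creation, and start. A sketch of how the same state could be inspected on the node with crictl, mirroring the crictl usage elsewhere in this report (the profile name and container ID are taken from this log; running crictl through minikube ssh is an assumption):

    # List the sandbox and container created above, then tail the container's output
    minikube -p addons-734405 ssh -- sudo crictl pods --name hello-world-app-5d498dc89-zwl54
    minikube -p addons-734405 ssh -- sudo crictl ps -a --name hello-world-app
    minikube -p addons-734405 ssh -- sudo crictl logs a50ad2f6779615ae076026a86b7c464e6f07dc0d3f15812eb60cd10773f496cf

The container status section that follows is the same kind of listing, gathered for every container on the node.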
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a50ad2f677961       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   18a019010dca5       hello-world-app-5d498dc89-zwl54             default
	fd7f7dd0fdf87       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   2a7511c12a7c8       registry-creds-764b6fb674-8smmr             kube-system
	eaeb84202c06c       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                                           2 minutes ago            Running             nginx                                    0                   741568609cd44       nginx                                       default
	78e218eaf8072       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   239d5ab16a4a7       busybox                                     default
	d1cc7252170ad       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   d239d101dcfd8       ingress-nginx-controller-85d4c799dd-dmwnv   ingress-nginx
	8193c5ae3e9a0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	676b24cbeac1b       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	ae4f670583b4b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	4da1a1c1615a1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	83cd8b34dd2bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	cc1211cf07843       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   8ef659878b618       gcp-auth-78565c9fb4-f5n74                   gcp-auth
	931b6bedd64cc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            3 minutes ago            Running             gadget                                   0                   69137a0adeada       gadget-lvc5c                                gadget
	9154e33c67350       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   9a93df1c76584       registry-proxy-5xdvv                        kube-system
	ec5953c7bde6a       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago            Running             yakd                                     0                   3ba81922d56f6       yakd-dashboard-6654c87f9b-lz7ml             yakd-dashboard
	091d53cfa2f7b       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             3 minutes ago            Exited              patch                                    2                   a85101bc1cf84       ingress-nginx-admission-patch-gp4pn         ingress-nginx
	fc4218afd9e59       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   ff7dc378427a8       snapshot-controller-7d9fbc56b8-w6gfv        kube-system
	2c76399e64dc0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   267ef654d5296       ingress-nginx-admission-create-r2l6g        ingress-nginx
	d37800c5570f8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   26e3f02e5e27d       amd-gpu-device-plugin-s628b                 kube-system
	5acc717deb7f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   b9eb8b9eca650       snapshot-controller-7d9fbc56b8-fn24t        kube-system
	737d21aac5c57       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   52dc869c667e0       csi-hostpath-attacher-0                     kube-system
	749cd4daccd50       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	c99f35ca87dcf       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   9355ac06d6fab       nvidia-device-plugin-daemonset-jlq7q        kube-system
	33aa662cb1f0b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   0c1a8c749af66       csi-hostpath-resizer-0                      kube-system
	d7348a5e060fd       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   93449b6dc50bc       kube-ingress-dns-minikube                   kube-system
	5afbb455983c1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   3929024e9b89b       local-path-provisioner-648f6765c9-csc7x     local-path-storage
	8a26f7364135b       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               3 minutes ago            Running             cloud-spanner-emulator                   0                   1d5f3fc2299bb       cloud-spanner-emulator-85df47b6f4-ltblw     default
	abf23e714a098       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   7a298569ec09f       registry-6b586f9694-5p6mn                   kube-system
	54e47bcdd2cec       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   7d0f5f70d0808       metrics-server-85b7d694d7-gzztd             kube-system
	d6093c1a7f9f6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   8dfe3a2ba5b1a       coredns-66bc5c9577-wq5c4                    kube-system
	23a6a681dd961       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   207b293bc67d7       storage-provisioner                         kube-system
	e631c821d8606       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           3 minutes ago            Running             kindnet-cni                              0                   6541b47b9f4b3       kindnet-z9kv6                               kube-system
	026bbd1e79a4d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago            Running             kube-proxy                               0                   7008e56743e7d       kube-proxy-w42q9                            kube-system
	e8e92c3f6bb0c       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago            Running             kube-scheduler                           0                   a9f4091a0c657       kube-scheduler-addons-734405                kube-system
	8989e50092359       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago            Running             kube-controller-manager                  0                   0e0d1e454a339       kube-controller-manager-addons-734405       kube-system
	5cbca605ea4a5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago            Running             kube-apiserver                           0                   3b53eb1af2ac4       kube-apiserver-addons-734405                kube-system
	a790cf4635e7c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago            Running             etcd                                     0                   c8ef5c03af77c       etcd-addons-734405                          kube-system
	
	
	==> coredns [d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2] <==
	[INFO] 10.244.0.21:54500 - 22055 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101893s
	[INFO] 10.244.0.21:48975 - 12677 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004871059s
	[INFO] 10.244.0.21:60240 - 37351 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005335674s
	[INFO] 10.244.0.21:56562 - 43127 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005155105s
	[INFO] 10.244.0.21:55771 - 16484 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005293981s
	[INFO] 10.244.0.21:57930 - 7089 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004541723s
	[INFO] 10.244.0.21:34421 - 50285 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004697359s
	[INFO] 10.244.0.21:40060 - 58720 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000797793s
	[INFO] 10.244.0.21:60413 - 33616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001150479s
	[INFO] 10.244.0.26:38809 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000253206s
	[INFO] 10.244.0.26:42534 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166196s
	[INFO] 10.244.0.31:51654 - 12646 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000201478s
	[INFO] 10.244.0.31:50634 - 58251 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000288847s
	[INFO] 10.244.0.31:48095 - 9368 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00012446s
	[INFO] 10.244.0.31:52813 - 33829 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00016791s
	[INFO] 10.244.0.31:37941 - 44740 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000115774s
	[INFO] 10.244.0.31:43376 - 59461 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000182368s
	[INFO] 10.244.0.31:54349 - 36318 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004854268s
	[INFO] 10.244.0.31:48059 - 32081 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006588662s
	[INFO] 10.244.0.31:32770 - 60006 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00474505s
	[INFO] 10.244.0.31:51255 - 37987 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005387298s
	[INFO] 10.244.0.31:57587 - 51619 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004774679s
	[INFO] 10.244.0.31:57161 - 47248 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005644343s
	[INFO] 10.244.0.31:48796 - 4688 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001561861s
	[INFO] 10.244.0.31:49311 - 58102 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001645176s
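
The CoreDNS queries above are ordinary search-domain expansion: each suffix in the pod's search path (the cluster.local domains plus the node's GCE suffixes) answers NXDOMAIN before the bare storage.googleapis.com or accounts.google.com name resolves with NOERROR. To observe the same expansion from a workload, assuming the busybox pod in the default namespace and that its image ships nslookup:

    kubectl exec busybox -- cat /etc/resolv.conf        # search list and ndots option that drive the extra queries
    kubectl exec busybox -- nslookup storage.googleapis.com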
	
	
	==> describe nodes <==
	Name:               addons-734405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-734405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=addons-734405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T19_47_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-734405
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-734405"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 19:47:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-734405
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 19:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 19:50:07 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 19:50:07 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 19:50:07 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 19:50:07 +0000   Sun, 21 Dec 2025 19:47:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-734405
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                74e3dc80-d0bb-45e6-9642-dc97dff8bb7b
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     cloud-spanner-emulator-85df47b6f4-ltblw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  default                     hello-world-app-5d498dc89-zwl54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-lvc5c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  gcp-auth                    gcp-auth-78565c9fb4-f5n74                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-dmwnv    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m43s
	  kube-system                 amd-gpu-device-plugin-s628b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-66bc5c9577-wq5c4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m44s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 csi-hostpathplugin-9tblq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-addons-734405                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m51s
	  kube-system                 kindnet-z9kv6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m45s
	  kube-system                 kube-apiserver-addons-734405                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-addons-734405        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-w42q9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-addons-734405                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 metrics-server-85b7d694d7-gzztd              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m43s
	  kube-system                 nvidia-device-plugin-daemonset-jlq7q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 registry-6b586f9694-5p6mn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 registry-creds-764b6fb674-8smmr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 registry-proxy-5xdvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 snapshot-controller-7d9fbc56b8-fn24t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-w6gfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  local-path-storage          local-path-provisioner-648f6765c9-csc7x      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-lz7ml              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m43s  kube-proxy       
	  Normal  Starting                 3m50s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s  kubelet          Node addons-734405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s  kubelet          Node addons-734405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s  kubelet          Node addons-734405 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m46s  node-controller  Node addons-734405 event: Registered Node addons-734405 in Controller
	  Normal  NodeReady                3m32s  kubelet          Node addons-734405 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085350] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025061] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.894686] kauditd_printk_skb: 47 callbacks suppressed
	[Dec21 19:48] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.000151] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023871] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023899] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +2.047760] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +4.031573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +8.255179] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 19:49] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[ +32.252695] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	
	
	==> etcd [a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8] <==
	{"level":"info","ts":"2025-12-21T19:47:34.031399Z","caller":"traceutil/trace.go:172","msg":"trace[1645263604] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"124.795067ms","start":"2025-12-21T19:47:33.906587Z","end":"2025-12-21T19:47:34.031382Z","steps":["trace[1645263604] 'process raft request'  (duration: 124.666179ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:34.051726Z","caller":"traceutil/trace.go:172","msg":"trace[1074436700] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"139.361329ms","start":"2025-12-21T19:47:33.912353Z","end":"2025-12-21T19:47:34.051714Z","steps":["trace[1074436700] 'process raft request'  (duration: 139.28643ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:47:37.876386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.883766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.895436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.902048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:45.484399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.60286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.484432Z","caller":"traceutil/trace.go:172","msg":"trace[58265122] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"221.616683ms","start":"2025-12-21T19:47:45.262797Z","end":"2025-12-21T19:47:45.484414Z","steps":["trace[58265122] 'process raft request'  (duration: 158.815668ms)","trace[58265122] 'compare'  (duration: 62.706435ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:47:45.484468Z","caller":"traceutil/trace.go:172","msg":"trace[2021833767] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1104; }","duration":"156.689768ms","start":"2025-12-21T19:47:45.327765Z","end":"2025-12-21T19:47:45.484454Z","steps":["trace[2021833767] 'agreement among raft nodes before linearized reading'  (duration: 93.823611ms)","trace[2021833767] 'range keys from in-memory index tree'  (duration: 62.751705ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:47:45.578095Z","caller":"traceutil/trace.go:172","msg":"trace[845078922] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1127; }","duration":"156.50217ms","start":"2025-12-21T19:47:45.421571Z","end":"2025-12-21T19:47:45.578073Z","steps":["trace[845078922] 'read index received'  (duration: 156.494889ms)","trace[845078922] 'applied index is now lower than readState.Index'  (duration: 6.471µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.578196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.877517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"warn","ts":"2025-12-21T19:47:45.578210Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.595517ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.578257Z","caller":"traceutil/trace.go:172","msg":"trace[398374212] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1105; }","duration":"221.653683ms","start":"2025-12-21T19:47:45.356598Z","end":"2025-12-21T19:47:45.578251Z","steps":["trace[398374212] 'agreement among raft nodes before linearized reading'  (duration: 221.573883ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.578259Z","caller":"traceutil/trace.go:172","msg":"trace[2024770133] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1105; }","duration":"206.923164ms","start":"2025-12-21T19:47:45.371297Z","end":"2025-12-21T19:47:45.578220Z","steps":["trace[2024770133] 'agreement among raft nodes before linearized reading'  (duration: 206.795685ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.578322Z","caller":"traceutil/trace.go:172","msg":"trace[47854218] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"240.869293ms","start":"2025-12-21T19:47:45.337442Z","end":"2025-12-21T19:47:45.578312Z","steps":["trace[47854218] 'process raft request'  (duration: 240.759428ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.698456Z","caller":"traceutil/trace.go:172","msg":"trace[495229169] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"120.281094ms","start":"2025-12-21T19:47:45.578156Z","end":"2025-12-21T19:47:45.698437Z","steps":["trace[495229169] 'read index received'  (duration: 120.274591ms)","trace[495229169] 'applied index is now lower than readState.Index'  (duration: 5.27µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.748319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.965733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.748375Z","caller":"traceutil/trace.go:172","msg":"trace[1384059515] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1106; }","duration":"312.030752ms","start":"2025-12-21T19:47:45.436333Z","end":"2025-12-21T19:47:45.748364Z","steps":["trace[1384059515] 'agreement among raft nodes before linearized reading'  (duration: 262.187251ms)","trace[1384059515] 'range keys from in-memory index tree'  (duration: 49.754993ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.748400Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:47:45.436316Z","time spent":"312.079127ms","remote":"127.0.0.1:39486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-21T19:47:45.748472Z","caller":"traceutil/trace.go:172","msg":"trace[553710565] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"259.084644ms","start":"2025-12-21T19:47:45.489370Z","end":"2025-12-21T19:47:45.748454Z","steps":["trace[553710565] 'process raft request'  (duration: 209.097207ms)","trace[553710565] 'compare'  (duration: 49.887204ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.754959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.96993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-gp4pn\" limit:1 ","response":"range_response_count:1 size:4944"}
	{"level":"info","ts":"2025-12-21T19:47:45.755006Z","caller":"traceutil/trace.go:172","msg":"trace[1674152388] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-gp4pn; range_end:; response_count:1; response_revision:1107; }","duration":"146.02389ms","start":"2025-12-21T19:47:45.608971Z","end":"2025-12-21T19:47:45.754995Z","steps":["trace[1674152388] 'agreement among raft nodes before linearized reading'  (duration: 145.897545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755046Z","caller":"traceutil/trace.go:172","msg":"trace[769583568] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"128.142512ms","start":"2025-12-21T19:47:45.626892Z","end":"2025-12-21T19:47:45.755035Z","steps":["trace[769583568] 'process raft request'  (duration: 128.10879ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755084Z","caller":"traceutil/trace.go:172","msg":"trace[2017445893] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"172.935164ms","start":"2025-12-21T19:47:45.582135Z","end":"2025-12-21T19:47:45.755070Z","steps":["trace[2017445893] 'process raft request'  (duration: 172.829856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755097Z","caller":"traceutil/trace.go:172","msg":"trace[2117487475] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"173.654174ms","start":"2025-12-21T19:47:45.581430Z","end":"2025-12-21T19:47:45.755084Z","steps":["trace[2117487475] 'process raft request'  (duration: 173.448791ms)"],"step_count":1}
	
	
	==> gcp-auth [cc1211cf078437dc18f5b7b00cbb8a6afea2bfe1bc5def1261033d9805cf3fd7] <==
	2025/12/21 19:47:49 GCP Auth Webhook started!
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	2025/12/21 19:48:15 Ready to marshal response ...
	2025/12/21 19:48:15 Ready to write response ...
	2025/12/21 19:48:16 Ready to marshal response ...
	2025/12/21 19:48:16 Ready to write response ...
	2025/12/21 19:48:20 Ready to marshal response ...
	2025/12/21 19:48:20 Ready to write response ...
	2025/12/21 19:48:24 Ready to marshal response ...
	2025/12/21 19:48:24 Ready to write response ...
	2025/12/21 19:48:25 Ready to marshal response ...
	2025/12/21 19:48:25 Ready to write response ...
	2025/12/21 19:48:28 Ready to marshal response ...
	2025/12/21 19:48:28 Ready to write response ...
	2025/12/21 19:48:49 Ready to marshal response ...
	2025/12/21 19:48:49 Ready to write response ...
	2025/12/21 19:50:51 Ready to marshal response ...
	2025/12/21 19:50:51 Ready to write response ...
	
	
	==> kernel <==
	 19:50:53 up 33 min,  0 user,  load average: 0.60, 0.64, 0.31
	Linux addons-734405 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f] <==
	I1221 19:48:51.691736       1 main.go:301] handling current node
	I1221 19:49:01.694289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:01.694320       1 main.go:301] handling current node
	I1221 19:49:11.691805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:11.691834       1 main.go:301] handling current node
	I1221 19:49:21.691422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:21.691452       1 main.go:301] handling current node
	I1221 19:49:31.691336       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:31.691370       1 main.go:301] handling current node
	I1221 19:49:41.691341       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:41.691377       1 main.go:301] handling current node
	I1221 19:49:51.698290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:49:51.698320       1 main.go:301] handling current node
	I1221 19:50:01.699295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:01.699324       1 main.go:301] handling current node
	I1221 19:50:11.691291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:11.691325       1 main.go:301] handling current node
	I1221 19:50:21.698468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:21.698500       1 main.go:301] handling current node
	I1221 19:50:31.698211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:31.698269       1 main.go:301] handling current node
	I1221 19:50:41.691353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:41.691393       1 main.go:301] handling current node
	I1221 19:50:51.698687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:50:51.698715       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce] <==
	E1221 19:47:25.542182       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.582919       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.664578       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.826072       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:26.146950       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	W1221 19:47:26.519004       1 handler_proxy.go:99] no RequestInfo found in the context
	W1221 19:47:26.519015       1 handler_proxy.go:99] no RequestInfo found in the context
	E1221 19:47:26.519047       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1221 19:47:26.519064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1221 19:47:26.519098       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1221 19:47:26.520256       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1221 19:47:26.815125       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1221 19:47:37.876327       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.883736       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.895370       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.902021       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1221 19:48:09.066062       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41288: use of closed network connection
	E1221 19:48:09.204950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41302: use of closed network connection
	I1221 19:48:25.074605       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1221 19:48:25.248216       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.25.138"}
	I1221 19:48:37.186023       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1221 19:50:51.885567       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.37.209"}
	
	
	==> kube-controller-manager [8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2] <==
	I1221 19:47:07.860708       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 19:47:07.860733       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 19:47:07.860760       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 19:47:07.860777       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 19:47:07.860811       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 19:47:07.860762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 19:47:07.860812       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 19:47:07.860817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1221 19:47:07.860780       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 19:47:07.863450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:07.866692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:07.872938       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 19:47:07.872994       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 19:47:07.873017       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 19:47:07.873022       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 19:47:07.873026       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 19:47:07.878136       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-734405" podCIDRs=["10.244.0.0/24"]
	I1221 19:47:07.879102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1221 19:47:10.258187       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1221 19:47:22.861476       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1221 19:47:37.869348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1221 19:47:37.869435       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1221 19:47:37.889053       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1221 19:47:37.970012       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:37.990189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe] <==
	I1221 19:47:09.017510       1 server_linux.go:53] "Using iptables proxy"
	I1221 19:47:09.160962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 19:47:09.262546       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 19:47:09.262585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1221 19:47:09.262666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 19:47:09.332533       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 19:47:09.332623       1 server_linux.go:132] "Using iptables Proxier"
	I1221 19:47:09.339695       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 19:47:09.345297       1 server.go:527] "Version info" version="v1.34.3"
	I1221 19:47:09.345430       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:47:09.346946       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 19:47:09.347415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 19:47:09.347454       1 config.go:200] "Starting service config controller"
	I1221 19:47:09.347461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 19:47:09.347478       1 config.go:106] "Starting endpoint slice config controller"
	I1221 19:47:09.347483       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 19:47:09.347075       1 config.go:309] "Starting node config controller"
	I1221 19:47:09.347501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 19:47:09.347507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 19:47:09.448278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 19:47:09.448972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 19:47:09.449006       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2] <==
	E1221 19:47:00.865580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:47:00.865610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 19:47:00.865644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 19:47:00.865678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 19:47:00.865714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 19:47:00.865749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 19:47:00.866020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 19:47:00.866020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 19:47:00.866271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 19:47:00.866418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 19:47:00.866441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 19:47:00.866464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 19:47:00.866557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:47:00.866575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:47:01.827291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 19:47:01.855333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 19:47:01.961904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:47:01.973084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:47:01.973216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 19:47:01.981471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 19:47:02.007611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 19:47:02.027588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 19:47:02.075917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:47:02.120519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1221 19:47:05.363031       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.549640    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^11536f40-dea6-11f0-82ef-56c101170ac3" (OuterVolumeSpecName: "task-pv-storage") pod "7dc20d42-012e-4b39-8e98-83fe50506a4a" (UID: "7dc20d42-012e-4b39-8e98-83fe50506a4a"). InnerVolumeSpecName "pvc-e3b28138-97a2-4d62-b958-bc2cc1482552". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.646915    1290 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-e3b28138-97a2-4d62-b958-bc2cc1482552\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^11536f40-dea6-11f0-82ef-56c101170ac3\") on node \"addons-734405\" "
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.646955    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7nhk\" (UniqueName: \"kubernetes.io/projected/7dc20d42-012e-4b39-8e98-83fe50506a4a-kube-api-access-d7nhk\") on node \"addons-734405\" DevicePath \"\""
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.652419    1290 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-e3b28138-97a2-4d62-b958-bc2cc1482552" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^11536f40-dea6-11f0-82ef-56c101170ac3") on node "addons-734405"
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.748110    1290 reconciler_common.go:299] "Volume detached for volume \"pvc-e3b28138-97a2-4d62-b958-bc2cc1482552\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^11536f40-dea6-11f0-82ef-56c101170ac3\") on node \"addons-734405\" DevicePath \"\""
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.933846    1290 scope.go:117] "RemoveContainer" containerID="f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0"
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.943192    1290 scope.go:117] "RemoveContainer" containerID="f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0"
	Dec 21 19:48:56 addons-734405 kubelet[1290]: E1221 19:48:56.943567    1290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0\": container with ID starting with f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0 not found: ID does not exist" containerID="f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0"
	Dec 21 19:48:56 addons-734405 kubelet[1290]: I1221 19:48:56.943606    1290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0"} err="failed to get container status \"f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0\": rpc error: code = NotFound desc = could not find container \"f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0\": container with ID starting with f557fbab42cc218223575a62d85477d24e07a300a9ae3863ea92f9faa3c721f0 not found: ID does not exist"
	Dec 21 19:48:57 addons-734405 kubelet[1290]: I1221 19:48:57.406860    1290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc20d42-012e-4b39-8e98-83fe50506a4a" path="/var/lib/kubelet/pods/7dc20d42-012e-4b39-8e98-83fe50506a4a/volumes"
	Dec 21 19:49:00 addons-734405 kubelet[1290]: I1221 19:49:00.404169    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-s628b" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:49:02 addons-734405 kubelet[1290]: I1221 19:49:02.404786    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jlq7q" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:49:03 addons-734405 kubelet[1290]: I1221 19:49:03.395685    1290 scope.go:117] "RemoveContainer" containerID="2cb7999baafc4c44a3124a2385272da2ac72fcb48ea758b9f0f81ebebe644c11"
	Dec 21 19:49:03 addons-734405 kubelet[1290]: I1221 19:49:03.406671    1290 scope.go:117] "RemoveContainer" containerID="fb8a0566074694e8d3354a5e07333a219b6df96477e2c35605f8aa70ef19f784"
	Dec 21 19:49:03 addons-734405 kubelet[1290]: I1221 19:49:03.414545    1290 scope.go:117] "RemoveContainer" containerID="f2f6a6972bde57b4f87ec51863a69846bfedb9125678cec7889ab3f232f9a0e6"
	Dec 21 19:49:03 addons-734405 kubelet[1290]: I1221 19:49:03.421239    1290 scope.go:117] "RemoveContainer" containerID="59ffea9676806dacb5e88b6f1af028cfce7e329607b9d6570341435c563599e4"
	Dec 21 19:49:03 addons-734405 kubelet[1290]: I1221 19:49:03.428321    1290 scope.go:117] "RemoveContainer" containerID="7fdbec5b50a96b2b5e4e6500428d10644826708a59a85e9534f369172b348a94"
	Dec 21 19:49:07 addons-734405 kubelet[1290]: I1221 19:49:07.404807    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5xdvv" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:49:24 addons-734405 kubelet[1290]: E1221 19:49:24.837443    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-8smmr" podUID="45150a37-5dac-4f62-a0c4-4044a717c870"
	Dec 21 19:49:38 addons-734405 kubelet[1290]: I1221 19:49:38.089097    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-8smmr" podStartSLOduration=148.018539938 podStartE2EDuration="2m29.089080193s" podCreationTimestamp="2025-12-21 19:47:09 +0000 UTC" firstStartedPulling="2025-12-21 19:49:36.42722044 +0000 UTC m=+153.101589899" lastFinishedPulling="2025-12-21 19:49:37.497760695 +0000 UTC m=+154.172130154" observedRunningTime="2025-12-21 19:49:38.089077782 +0000 UTC m=+154.763447275" watchObservedRunningTime="2025-12-21 19:49:38.089080193 +0000 UTC m=+154.763449675"
	Dec 21 19:50:08 addons-734405 kubelet[1290]: I1221 19:50:08.405046    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jlq7q" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:50:29 addons-734405 kubelet[1290]: I1221 19:50:29.404404    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-s628b" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:50:32 addons-734405 kubelet[1290]: I1221 19:50:32.405377    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5xdvv" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:50:51 addons-734405 kubelet[1290]: I1221 19:50:51.955432    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/71c17498-5a25-493d-a69d-bef41e224512-gcp-creds\") pod \"hello-world-app-5d498dc89-zwl54\" (UID: \"71c17498-5a25-493d-a69d-bef41e224512\") " pod="default/hello-world-app-5d498dc89-zwl54"
	Dec 21 19:50:51 addons-734405 kubelet[1290]: I1221 19:50:51.955499    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89msp\" (UniqueName: \"kubernetes.io/projected/71c17498-5a25-493d-a69d-bef41e224512-kube-api-access-89msp\") pod \"hello-world-app-5d498dc89-zwl54\" (UID: \"71c17498-5a25-493d-a69d-bef41e224512\") " pod="default/hello-world-app-5d498dc89-zwl54"
	
	
	==> storage-provisioner [23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5] <==
	W1221 19:50:28.915511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:30.917931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:30.922261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:32.925275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:32.929325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:34.931910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:34.935477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:36.937948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:36.941093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:38.944029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:38.946948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:40.949351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:40.953728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:42.957106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:42.960447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:44.962774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:44.966172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:46.969219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:46.972795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:48.975345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:48.978980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:50.982781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:50.986380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:52.989385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:50:52.992885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-734405 -n addons-734405
helpers_test.go:270: (dbg) Run:  kubectl --context addons-734405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-734405 describe pod ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-734405 describe pod ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn: exit status 1 (54.382653ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r2l6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gp4pn" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-734405 describe pod ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (232.930207ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:50:54.185000   28991 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:50:54.185150   28991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:50:54.185162   28991 out.go:374] Setting ErrFile to fd 2...
	I1221 19:50:54.185168   28991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:50:54.185390   28991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:50:54.185658   28991 mustload.go:66] Loading cluster: addons-734405
	I1221 19:50:54.185961   28991 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:50:54.185986   28991 addons.go:622] checking whether the cluster is paused
	I1221 19:50:54.186083   28991 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:50:54.186100   28991 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:50:54.186544   28991 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:50:54.203864   28991 ssh_runner.go:195] Run: systemctl --version
	I1221 19:50:54.203919   28991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:50:54.219746   28991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:50:54.314154   28991 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:50:54.314267   28991 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:50:54.342304   28991 cri.go:96] found id: "fd7f7dd0fdf87b7dcc9d68b1726de3bd1e6a2dc4bb42c3d1720b424046c4f916"
	I1221 19:50:54.342342   28991 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:50:54.342347   28991 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:50:54.342350   28991 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:50:54.342352   28991 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:50:54.342357   28991 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:50:54.342360   28991 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:50:54.342363   28991 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:50:54.342365   28991 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:50:54.342372   28991 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:50:54.342375   28991 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:50:54.342378   28991 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:50:54.342381   28991 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:50:54.342384   28991 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:50:54.342387   28991 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:50:54.342394   28991 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:50:54.342397   28991 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:50:54.342401   28991 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:50:54.342404   28991 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:50:54.342406   28991 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:50:54.342415   28991 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:50:54.342418   28991 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:50:54.342421   28991 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:50:54.342423   28991 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:50:54.342426   28991 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:50:54.342429   28991 cri.go:96] found id: ""
	I1221 19:50:54.342466   28991 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:50:54.356624   28991 out.go:203] 
	W1221 19:50:54.357843   28991 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:50:54.357863   28991 out.go:285] * 
	* 
	W1221 19:50:54.360848   28991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:50:54.361970   28991 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable ingress --alsologtostderr -v=1: exit status 11 (231.582353ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:50:54.417748   29053 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:50:54.417902   29053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:50:54.417915   29053 out.go:374] Setting ErrFile to fd 2...
	I1221 19:50:54.417922   29053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:50:54.418172   29053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:50:54.418480   29053 mustload.go:66] Loading cluster: addons-734405
	I1221 19:50:54.418799   29053 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:50:54.418820   29053 addons.go:622] checking whether the cluster is paused
	I1221 19:50:54.418911   29053 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:50:54.418928   29053 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:50:54.419390   29053 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:50:54.436937   29053 ssh_runner.go:195] Run: systemctl --version
	I1221 19:50:54.437002   29053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:50:54.452822   29053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:50:54.547291   29053 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:50:54.547383   29053 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:50:54.574633   29053 cri.go:96] found id: "fd7f7dd0fdf87b7dcc9d68b1726de3bd1e6a2dc4bb42c3d1720b424046c4f916"
	I1221 19:50:54.574657   29053 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:50:54.574663   29053 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:50:54.574668   29053 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:50:54.574674   29053 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:50:54.574679   29053 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:50:54.574684   29053 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:50:54.574689   29053 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:50:54.574693   29053 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:50:54.574706   29053 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:50:54.574712   29053 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:50:54.574715   29053 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:50:54.574718   29053 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:50:54.574720   29053 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:50:54.574723   29053 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:50:54.574727   29053 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:50:54.574730   29053 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:50:54.574733   29053 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:50:54.574736   29053 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:50:54.574738   29053 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:50:54.574743   29053 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:50:54.574746   29053 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:50:54.574749   29053 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:50:54.574752   29053 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:50:54.574755   29053 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:50:54.574757   29053 cri.go:96] found id: ""
	I1221 19:50:54.574792   29053 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:50:54.588154   29053 out.go:203] 
	W1221 19:50:54.589316   29053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:50:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:50:54.589336   29053 out.go:285] * 
	* 
	W1221 19:50:54.592207   29053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:50:54.593452   29053 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (149.77s)
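Note on the failure mode above (shared by the other addon failures in this run): before disabling an addon, minikube checks whether the cluster is paused by listing CRI containers and then running `sudo runc list -f json` on the node; on this crio-based node /run/runc does not exist, so the check fails and the command exits with MK_ADDON_DISABLE_PAUSED before the addon is touched. A minimal sketch for confirming this by hand, assuming the addons-734405 profile is still running; these are ordinary minikube/crictl invocations, not part of the test suite:

    # Re-run the failing paused-check directly on the node; expected to fail the same way
    minikube -p addons-734405 ssh "sudo runc list -f json"
    # Compare state directories: crio keeps its state under /run/crio, so /run/runc may simply be absent
    minikube -p addons-734405 ssh "ls -ld /run/runc /run/crio"
    # Containers are still visible through the CRI, i.e. the cluster is not actually paused
    minikube -p addons-734405 ssh "sudo crictl ps --quiet"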

TestAddons/parallel/InspektorGadget (6.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-lvc5c" [69e4ee7b-1e7f-44e5-bc56-a741483d7c5e] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002377759s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (239.565695ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:33.242771   25953 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:33.243032   25953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:33.243042   25953 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:33.243047   25953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:33.243258   25953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:33.243507   25953 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:33.244546   25953 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:33.244582   25953 addons.go:622] checking whether the cluster is paused
	I1221 19:48:33.244888   25953 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:33.244912   25953 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:33.245392   25953 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:33.263310   25953 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:33.263356   25953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:33.280128   25953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:33.376629   25953 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:33.376718   25953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:33.404722   25953 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:33.404746   25953 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:33.404752   25953 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:33.404757   25953 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:33.404762   25953 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:33.404767   25953 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:33.404772   25953 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:33.404776   25953 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:33.404780   25953 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:33.404788   25953 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:33.404793   25953 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:33.404797   25953 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:33.404804   25953 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:33.404809   25953 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:33.404814   25953 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:33.404821   25953 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:33.404825   25953 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:33.404832   25953 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:33.404842   25953 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:33.404847   25953 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:33.404855   25953 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:33.404860   25953 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:33.404868   25953 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:33.404873   25953 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:33.404880   25953 cri.go:96] found id: ""
	I1221 19:48:33.404925   25953 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:33.418963   25953 out.go:203] 
	W1221 19:48:33.420151   25953 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:33.420173   25953 out.go:285] * 
	* 
	W1221 19:48:33.423419   25953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:33.424811   25953 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.24s)
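As with the other addon tests, the workload itself was healthy (the pod matching k8s-app=gadget in the gadget namespace reported Running within about 6s); only the disable call failed, on the same paused-check described above. A direct check of the addon's pods, reusing the namespace and label the test waits on (standard kubectl, shown only as a sketch):

    kubectl --context addons-734405 get pods -n gadget -l k8s-app=gadget -o wide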

TestAddons/parallel/MetricsServer (6.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.306522ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002304877s
addons_test.go:465: (dbg) Run:  kubectl --context addons-734405 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (236.968985ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:15.568860   23734 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:15.568993   23734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:15.569002   23734 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:15.569006   23734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:15.569193   23734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:15.569470   23734 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:15.569751   23734 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:15.569772   23734 addons.go:622] checking whether the cluster is paused
	I1221 19:48:15.569869   23734 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:15.569887   23734 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:15.570337   23734 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:15.587266   23734 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:15.587311   23734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:15.604263   23734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:15.698265   23734 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:15.698341   23734 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:15.729142   23734 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:15.729166   23734 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:15.729172   23734 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:15.729177   23734 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:15.729185   23734 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:15.729202   23734 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:15.729211   23734 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:15.729216   23734 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:15.729242   23734 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:15.729252   23734 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:15.729259   23734 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:15.729264   23734 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:15.729270   23734 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:15.729276   23734 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:15.729283   23734 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:15.729290   23734 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:15.729294   23734 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:15.729299   23734 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:15.729308   23734 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:15.729313   23734 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:15.729332   23734 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:15.729340   23734 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:15.729345   23734 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:15.729352   23734 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:15.729357   23734 cri.go:96] found id: ""
	I1221 19:48:15.729414   23734 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:15.743290   23734 out.go:203] 
	W1221 19:48:15.744457   23734 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:15.744472   23734 out.go:285] * 
	* 
	W1221 19:48:15.747402   23734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:15.748703   23734 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.31s)
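The metrics-server addon itself came up: the pod went healthy and `kubectl top pods -n kube-system` returned without error; only the trailing disable call failed, again on the paused-check. Two standard kubectl checks that exercise the metrics pipeline independently of the addon command (v1beta1.metrics.k8s.io is the APIService metrics-server registers; shown as a sketch, not part of the test):

    # The aggregated API should report Available=True once metrics-server is serving
    kubectl --context addons-734405 get apiservice v1beta1.metrics.k8s.io
    # Raw query against the metrics API as a second check
    kubectl --context addons-734405 get --raw /apis/metrics.k8s.io/v1beta1/nodes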

TestAddons/parallel/CSI (45.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1221 19:48:11.857911   12711 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1221 19:48:11.861360   12711 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1221 19:48:11.861388   12711 kapi.go:107] duration metric: took 3.491052ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.503722ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-734405 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/12/21 19:48:23 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-734405 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [5d5a1e2d-1c39-42d3-afd6-f62118176ed7] Pending
helpers_test.go:353: "task-pv-pod" [5d5a1e2d-1c39-42d3-afd6-f62118176ed7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [5d5a1e2d-1c39-42d3-afd6-f62118176ed7] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002834816s
addons_test.go:574: (dbg) Run:  kubectl --context addons-734405 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-734405 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-734405 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-734405 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-734405 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-734405 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-734405 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [7dc20d42-012e-4b39-8e98-83fe50506a4a] Pending
helpers_test.go:353: "task-pv-pod-restore" [7dc20d42-012e-4b39-8e98-83fe50506a4a] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.002919238s
addons_test.go:616: (dbg) Run:  kubectl --context addons-734405 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-734405 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-734405 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (236.32885ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:57.322679   26658 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:57.322982   26658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:57.322992   26658 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:57.322996   26658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:57.323159   26658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:57.323448   26658 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:57.323790   26658 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:57.323808   26658 addons.go:622] checking whether the cluster is paused
	I1221 19:48:57.323889   26658 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:57.323902   26658 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:57.324296   26658 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:57.342367   26658 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:57.342437   26658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:57.359502   26658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:57.455346   26658 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:57.455417   26658 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:57.482807   26658 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:57.482827   26658 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:57.482832   26658 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:57.482834   26658 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:57.482837   26658 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:57.482840   26658 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:57.482843   26658 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:57.482846   26658 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:57.482848   26658 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:57.482862   26658 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:57.482867   26658 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:57.482872   26658 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:57.482876   26658 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:57.482880   26658 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:57.482885   26658 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:57.482893   26658 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:57.482898   26658 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:57.482904   26658 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:57.482907   26658 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:57.482910   26658 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:57.482912   26658 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:57.482915   26658 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:57.482918   26658 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:57.482920   26658 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:57.482923   26658 cri.go:96] found id: ""
	I1221 19:48:57.482964   26658 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:57.496978   26658 out.go:203] 
	W1221 19:48:57.498140   26658 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:57.498158   26658 out.go:285] * 
	* 
	W1221 19:48:57.501099   26658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:57.502313   26658 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (236.114595ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1221 19:48:57.559435   26718 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:57.559579   26718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:57.559589   26718 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:57.559593   26718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:57.559821   26718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:57.560094   26718 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:57.560460   26718 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:57.560482   26718 addons.go:622] checking whether the cluster is paused
	I1221 19:48:57.560579   26718 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:57.560604   26718 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:57.560993   26718 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:57.578015   26718 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:57.578071   26718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:57.594420   26718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:57.690479   26718 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:57.690560   26718 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:57.719445   26718 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:57.719471   26718 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:57.719478   26718 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:57.719483   26718 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:57.719488   26718 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:57.719493   26718 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:57.719497   26718 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:57.719502   26718 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:57.719517   26718 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:57.719525   26718 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:57.719530   26718 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:57.719553   26718 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:57.719565   26718 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:57.719570   26718 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:57.719585   26718 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:57.719592   26718 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:57.719595   26718 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:57.719599   26718 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:57.719601   26718 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:57.719604   26718 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:57.719609   26718 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:57.719620   26718 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:57.719625   26718 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:57.719627   26718 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:57.719630   26718 cri.go:96] found id: ""
	I1221 19:48:57.719666   26718 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:57.732799   26718 out.go:203] 
	W1221 19:48:57.734244   26718 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:57.734267   26718 out.go:285] * 
	* 
	W1221 19:48:57.737192   26718 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:57.738532   26718 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.89s)
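Note: every addon enable/disable attempt in this run trips the same pre-check. Before touching an addon, minikube verifies the cluster is not paused by running `sudo runc list -f json` inside the node; on this crio node that command exits 1 because /run/runc does not exist, so the addon command aborts with MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED even though the crictl listing a few lines earlier finds the kube-system containers without trouble. A minimal reproduction sketch, assuming the addons-734405 profile from this run is still up and reachable via minikube ssh:

	# the check minikube performs; expected to fail on this node with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-734405 ssh -- sudo runc list -f json

	# the crictl listing shown in the log above, which does succeed against the crio runtime
	out/minikube-linux-amd64 -p addons-734405 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system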

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-734405 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-734405 --alsologtostderr -v=1: exit status 11 (247.718969ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:09.504151   22828 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:09.504460   22828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:09.504470   22828 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:09.504474   22828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:09.504675   22828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:09.504914   22828 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:09.505247   22828 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:09.505267   22828 addons.go:622] checking whether the cluster is paused
	I1221 19:48:09.505347   22828 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:09.505366   22828 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:09.505750   22828 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:09.522703   22828 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:09.522742   22828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:09.539851   22828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:09.634512   22828 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:09.634603   22828 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:09.663581   22828 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:09.663623   22828 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:09.663627   22828 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:09.663631   22828 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:09.663634   22828 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:09.663638   22828 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:09.663641   22828 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:09.663644   22828 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:09.663646   22828 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:09.663656   22828 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:09.663659   22828 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:09.663662   22828 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:09.663665   22828 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:09.663667   22828 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:09.663670   22828 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:09.663680   22828 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:09.663685   22828 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:09.663689   22828 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:09.663692   22828 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:09.663694   22828 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:09.663697   22828 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:09.663700   22828 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:09.663702   22828 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:09.663705   22828 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:09.663708   22828 cri.go:96] found id: ""
	I1221 19:48:09.663762   22828 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:09.677780   22828 out.go:203] 
	W1221 19:48:09.679029   22828 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:09.679059   22828 out.go:285] * 
	W1221 19:48:09.683139   22828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:09.684301   22828 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-734405 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-734405
helpers_test.go:244: (dbg) docker inspect addons-734405:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b",
	        "Created": "2025-12-21T19:46:47.567938506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T19:46:47.602137045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/hosts",
	        "LogPath": "/var/lib/docker/containers/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b/f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b-json.log",
	        "Name": "/addons-734405",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-734405:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-734405",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f342f561decce0c5c994cb9e9a96e8f75cf05aab1cd1545c32bafd16d6d0da1b",
	                "LowerDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/272b55b94d2f93f55db41749cd968ebd72f56ee1259b966f12182e59ffac95d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-734405",
	                "Source": "/var/lib/docker/volumes/addons-734405/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-734405",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-734405",
	                "name.minikube.sigs.k8s.io": "addons-734405",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3f3b83537715c70ffa6b8f14ff988ae577eac2f8ef7a89766945f782ca7b803e",
	            "SandboxKey": "/var/run/docker/netns/3f3b83537715",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-734405": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8cb8a005cb45712daf0fdc43f6bb5ec21f904d698b1f455f3203b92bae54f643",
	                    "EndpointID": "fa34e51fd9c93c6f6d93729555a31d9217b7c42bdff558507337bb24e9eda25b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:35:ab:b2:29:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-734405",
	                        "f342f561decc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
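For reference, the SSH endpoint the harness uses (127.0.0.1:32768 in the sshutil lines earlier) comes straight out of this inspect output. The properly shell-quoted form of the same template the test driver runs, assuming the addons-734405 container from this run is still present:

	# print the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-734405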
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-734405 -n addons-734405
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-734405 logs -n 25: (1.040386604s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-940314   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-940314                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-940314   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-650604 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-650604   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-650604                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-650604   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-551976 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-551976   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-551976                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-551976   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-940314                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-940314   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-650604                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-650604   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-551976                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-551976   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ --download-only -p download-docker-556619 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-556619 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ -p download-docker-556619                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-556619 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ --download-only -p binary-mirror-301733 --alsologtostderr --binary-mirror http://127.0.0.1:43353 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-301733   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-301733                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-301733   │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ addons  │ disable dashboard -p addons-734405                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ addons  │ enable dashboard -p addons-734405                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ start   │ -p addons-734405 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:48 UTC │
	│ addons  │ addons-734405 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ addons-734405 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	│ addons  │ enable headlamp -p addons-734405 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-734405          │ jenkins │ v1.37.0 │ 21 Dec 25 19:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:24.993009   14485 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:24.993269   14485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:24.993279   14485 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:24.993286   14485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:24.993478   14485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:46:24.994068   14485 out.go:368] Setting JSON to false
	I1221 19:46:24.994822   14485 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1734,"bootTime":1766344651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:24.994870   14485 start.go:143] virtualization: kvm guest
	I1221 19:46:24.996530   14485 out.go:179] * [addons-734405] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:24.997947   14485 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:46:24.997941   14485 notify.go:221] Checking for updates...
	I1221 19:46:25.000136   14485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:25.001621   14485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:46:25.002814   14485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:46:25.004069   14485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:46:25.005427   14485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:46:25.006643   14485 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:25.028740   14485 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:46:25.028870   14485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:25.082959   14485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-21 19:46:25.074335609 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:25.083053   14485 docker.go:319] overlay module found
	I1221 19:46:25.084689   14485 out.go:179] * Using the docker driver based on user configuration
	I1221 19:46:25.085754   14485 start.go:309] selected driver: docker
	I1221 19:46:25.085766   14485 start.go:928] validating driver "docker" against <nil>
	I1221 19:46:25.085777   14485 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:46:25.086331   14485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:25.142090   14485 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-21 19:46:25.132633111 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:25.142277   14485 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:25.142484   14485 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:46:25.144097   14485 out.go:179] * Using Docker driver with root privileges
	I1221 19:46:25.145146   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:46:25.145212   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:46:25.145251   14485 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 19:46:25.145318   14485 start.go:353] cluster config:
	{Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1221 19:46:25.146640   14485 out.go:179] * Starting "addons-734405" primary control-plane node in "addons-734405" cluster
	I1221 19:46:25.147631   14485 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 19:46:25.148718   14485 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 19:46:25.149770   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:25.149793   14485 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 19:46:25.149799   14485 cache.go:65] Caching tarball of preloaded images
	I1221 19:46:25.149800   14485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 19:46:25.149888   14485 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 19:46:25.149900   14485 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 19:46:25.150168   14485 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json ...
	I1221 19:46:25.150188   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json: {Name:mk3e65bc3be6a489d858bc2169da4b8071c2bfb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:25.165083   14485 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 19:46:25.165185   14485 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 19:46:25.165200   14485 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 19:46:25.165204   14485 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 19:46:25.165210   14485 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 19:46:25.165217   14485 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 from local cache
	I1221 19:46:38.946212   14485 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 from cached tarball
	I1221 19:46:38.946269   14485 cache.go:243] Successfully downloaded all kic artifacts
	I1221 19:46:38.946316   14485 start.go:360] acquireMachinesLock for addons-734405: {Name:mk30b118a4bdc15e39537bd7efedc75e73779231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 19:46:38.946421   14485 start.go:364] duration metric: took 84.092µs to acquireMachinesLock for "addons-734405"
	I1221 19:46:38.946444   14485 start.go:93] Provisioning new machine with config: &{Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:46:38.946524   14485 start.go:125] createHost starting for "" (driver="docker")
	I1221 19:46:39.068166   14485 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1221 19:46:39.068428   14485 start.go:159] libmachine.API.Create for "addons-734405" (driver="docker")
	I1221 19:46:39.068463   14485 client.go:173] LocalClient.Create starting
	I1221 19:46:39.068605   14485 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 19:46:39.103936   14485 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 19:46:39.177705   14485 cli_runner.go:164] Run: docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 19:46:39.195267   14485 cli_runner.go:211] docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 19:46:39.195335   14485 network_create.go:284] running [docker network inspect addons-734405] to gather additional debugging logs...
	I1221 19:46:39.195355   14485 cli_runner.go:164] Run: docker network inspect addons-734405
	W1221 19:46:39.210762   14485 cli_runner.go:211] docker network inspect addons-734405 returned with exit code 1
	I1221 19:46:39.210789   14485 network_create.go:287] error running [docker network inspect addons-734405]: docker network inspect addons-734405: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-734405 not found
	I1221 19:46:39.210805   14485 network_create.go:289] output of [docker network inspect addons-734405]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-734405 not found
	
	** /stderr **
	I1221 19:46:39.210982   14485 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 19:46:39.226924   14485 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc0530}
	I1221 19:46:39.226956   14485 network_create.go:124] attempt to create docker network addons-734405 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 19:46:39.227005   14485 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-734405 addons-734405
	I1221 19:46:39.432629   14485 network_create.go:108] docker network addons-734405 192.168.49.0/24 created
	I1221 19:46:39.432657   14485 kic.go:121] calculated static IP "192.168.49.2" for the "addons-734405" container
	I1221 19:46:39.432733   14485 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 19:46:39.448411   14485 cli_runner.go:164] Run: docker volume create addons-734405 --label name.minikube.sigs.k8s.io=addons-734405 --label created_by.minikube.sigs.k8s.io=true
	I1221 19:46:39.512158   14485 oci.go:103] Successfully created a docker volume addons-734405
	I1221 19:46:39.512239   14485 cli_runner.go:164] Run: docker run --rm --name addons-734405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --entrypoint /usr/bin/test -v addons-734405:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 19:46:43.723408   14485 cli_runner.go:217] Completed: docker run --rm --name addons-734405-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --entrypoint /usr/bin/test -v addons-734405:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib: (4.211128594s)
	I1221 19:46:43.723437   14485 oci.go:107] Successfully prepared a docker volume addons-734405
	I1221 19:46:43.723515   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:43.723529   14485 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 19:46:43.723601   14485 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-734405:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 19:46:47.500107   14485 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-734405:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.776467341s)
	I1221 19:46:47.500144   14485 kic.go:203] duration metric: took 3.776610597s to extract preloaded images to volume ...
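A small sketch, assuming the tarball path from the extraction command above, of how the preload contents can be listed by hand without unpacking them again:

    # Peek at the preloaded image tarball the sidecar just unpacked into the addons-734405 volume
    tar -I lz4 -tf /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 | head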
	W1221 19:46:47.500297   14485 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 19:46:47.500348   14485 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 19:46:47.500404   14485 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 19:46:47.552778   14485 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-734405 --name addons-734405 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-734405 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-734405 --network addons-734405 --ip 192.168.49.2 --volume addons-734405:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 19:46:47.830782   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Running}}
	I1221 19:46:47.848551   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:47.865280   14485 cli_runner.go:164] Run: docker exec addons-734405 stat /var/lib/dpkg/alternatives/iptables
	I1221 19:46:47.911527   14485 oci.go:144] the created container "addons-734405" has a running status.
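A short sketch, using the container name above, of how the published 127.0.0.1 host ports relied on by the SSH steps below can be listed:

    # All host port bindings for the node container (22/tcp resolves to e.g. 127.0.0.1:32768 below)
    docker port addons-734405
    # Single-port variant, mirroring the inspect template used throughout this log
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-734405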
	I1221 19:46:47.911556   14485 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa...
	I1221 19:46:47.992295   14485 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 19:46:48.015900   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:48.032667   14485 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 19:46:48.032691   14485 kic_runner.go:114] Args: [docker exec --privileged addons-734405 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 19:46:48.099488   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:46:48.124455   14485 machine.go:94] provisionDockerMachine start ...
	I1221 19:46:48.124571   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:48.146385   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:48.146732   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:48.146751   14485 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 19:46:48.147967   14485 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38330->127.0.0.1:32768: read: connection reset by peer
	I1221 19:46:51.280660   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-734405
	
	I1221 19:46:51.280687   14485 ubuntu.go:182] provisioning hostname "addons-734405"
	I1221 19:46:51.280749   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.297672   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.297886   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.297898   14485 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-734405 && echo "addons-734405" | sudo tee /etc/hostname
	I1221 19:46:51.440349   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-734405
	
	I1221 19:46:51.440427   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.457308   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.457542   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.457566   14485 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-734405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-734405/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-734405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 19:46:51.590291   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 19:46:51.590333   14485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 19:46:51.590354   14485 ubuntu.go:190] setting up certificates
	I1221 19:46:51.590369   14485 provision.go:84] configureAuth start
	I1221 19:46:51.590418   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:51.607414   14485 provision.go:143] copyHostCerts
	I1221 19:46:51.607490   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 19:46:51.607594   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 19:46:51.607646   14485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 19:46:51.607695   14485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.addons-734405 san=[127.0.0.1 192.168.49.2 addons-734405 localhost minikube]
	I1221 19:46:51.665172   14485 provision.go:177] copyRemoteCerts
	I1221 19:46:51.665253   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 19:46:51.665294   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.683101   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:51.778740   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 19:46:51.796257   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1221 19:46:51.812326   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 19:46:51.827762   14485 provision.go:87] duration metric: took 237.37976ms to configureAuth
	I1221 19:46:51.827791   14485 ubuntu.go:206] setting minikube options for container-runtime
	I1221 19:46:51.827950   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:46:51.828043   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:51.844642   14485 main.go:144] libmachine: Using SSH client type: native
	I1221 19:46:51.844844   14485 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1221 19:46:51.844859   14485 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 19:46:52.107443   14485 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 19:46:52.107466   14485 machine.go:97] duration metric: took 3.982985531s to provisionDockerMachine
	I1221 19:46:52.107479   14485 client.go:176] duration metric: took 13.039007838s to LocalClient.Create
	I1221 19:46:52.107506   14485 start.go:167] duration metric: took 13.039079196s to libmachine.API.Create "addons-734405"
	I1221 19:46:52.107518   14485 start.go:293] postStartSetup for "addons-734405" (driver="docker")
	I1221 19:46:52.107532   14485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 19:46:52.107592   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 19:46:52.107643   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.124968   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.222393   14485 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 19:46:52.225632   14485 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 19:46:52.225654   14485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 19:46:52.225663   14485 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 19:46:52.225722   14485 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 19:46:52.225750   14485 start.go:296] duration metric: took 118.224774ms for postStartSetup
	I1221 19:46:52.226028   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:52.243062   14485 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/config.json ...
	I1221 19:46:52.243331   14485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 19:46:52.243372   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.260291   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.353370   14485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 19:46:52.357588   14485 start.go:128] duration metric: took 13.411046704s to createHost
	I1221 19:46:52.357615   14485 start.go:83] releasing machines lock for "addons-734405", held for 13.411183076s
	I1221 19:46:52.357673   14485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-734405
	I1221 19:46:52.375593   14485 ssh_runner.go:195] Run: cat /version.json
	I1221 19:46:52.375637   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.375676   14485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 19:46:52.375735   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:46:52.393927   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.394207   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:46:52.486293   14485 ssh_runner.go:195] Run: systemctl --version
	I1221 19:46:52.538541   14485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 19:46:52.571399   14485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 19:46:52.575686   14485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 19:46:52.575749   14485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 19:46:52.599848   14485 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 19:46:52.599870   14485 start.go:496] detecting cgroup driver to use...
	I1221 19:46:52.599899   14485 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 19:46:52.599945   14485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 19:46:52.614887   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 19:46:52.626292   14485 docker.go:218] disabling cri-docker service (if available) ...
	I1221 19:46:52.626355   14485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 19:46:52.641610   14485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 19:46:52.657577   14485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 19:46:52.736346   14485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 19:46:52.818380   14485 docker.go:234] disabling docker service ...
	I1221 19:46:52.818433   14485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 19:46:52.835383   14485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 19:46:52.846941   14485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 19:46:52.925835   14485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 19:46:53.003549   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 19:46:53.015001   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 19:46:53.027628   14485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 19:46:53.027687   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.036845   14485 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 19:46:53.036904   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.044690   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.052464   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.060376   14485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 19:46:53.067525   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.075408   14485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 19:46:53.087256   14485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
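A small sketch, assuming the file layout of the kicbase image, of how the settings touched by the sed edits above could be checked from inside the node:

    # Settings adjusted above: pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Expected (illustrative) matches:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",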
	I1221 19:46:53.094903   14485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 19:46:53.101259   14485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1221 19:46:53.101310   14485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1221 19:46:53.111993   14485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 19:46:53.119773   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:53.195750   14485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 19:46:53.321793   14485 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 19:46:53.321866   14485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 19:46:53.325747   14485 start.go:564] Will wait 60s for crictl version
	I1221 19:46:53.325802   14485 ssh_runner.go:195] Run: which crictl
	I1221 19:46:53.329286   14485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 19:46:53.353176   14485 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 19:46:53.353306   14485 ssh_runner.go:195] Run: crio --version
	I1221 19:46:53.379302   14485 ssh_runner.go:195] Run: crio --version
	I1221 19:46:53.407066   14485 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 19:46:53.408164   14485 cli_runner.go:164] Run: docker network inspect addons-734405 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 19:46:53.424480   14485 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 19:46:53.428544   14485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 19:46:53.438103   14485 kubeadm.go:884] updating cluster {Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 19:46:53.438216   14485 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 19:46:53.438288   14485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:53.466408   14485 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 19:46:53.466435   14485 crio.go:433] Images already preloaded, skipping extraction
	I1221 19:46:53.466484   14485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 19:46:53.490134   14485 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 19:46:53.490156   14485 cache_images.go:86] Images are preloaded, skipping loading
	I1221 19:46:53.490163   14485 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1221 19:46:53.490301   14485 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-734405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
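A sketch of how the kubelet drop-in generated from the fragment above can be inspected on the node (the 10-kubeadm.conf path matches the scp target a few lines further down):

    # Show the rendered drop-in and the merged unit systemd will actually run
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet.service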
	I1221 19:46:53.490400   14485 ssh_runner.go:195] Run: crio config
	I1221 19:46:53.532618   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:46:53.532640   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:46:53.532656   14485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 19:46:53.532682   14485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-734405 NodeName:addons-734405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 19:46:53.532827   14485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-734405"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 19:46:53.532901   14485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 19:46:53.540663   14485 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 19:46:53.540715   14485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 19:46:53.547896   14485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1221 19:46:53.559635   14485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 19:46:53.573672   14485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
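A minimal sketch, assuming --dry-run is acceptable on this node, of how the kubeadm config copied above could be exercised before the real init below:

    # Run kubeadm against the staged config without creating the cluster
    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run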
	I1221 19:46:53.585142   14485 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 19:46:53.588479   14485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 19:46:53.597744   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:46:53.675621   14485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:46:53.700473   14485 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405 for IP: 192.168.49.2
	I1221 19:46:53.700498   14485 certs.go:195] generating shared ca certs ...
	I1221 19:46:53.700515   14485 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.700648   14485 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 19:46:53.798118   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt ...
	I1221 19:46:53.798153   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt: {Name:mk670d7a9ae2f463db74b60744ff0c0716b9481f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.798360   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key ...
	I1221 19:46:53.798376   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key: {Name:mk386ce7a21cb5370b96f28cf7c9eea7f93f736f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.798483   14485 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 19:46:53.881850   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt ...
	I1221 19:46:53.881878   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt: {Name:mk2a09c52952c55f436663f02992211eb851389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.882068   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key ...
	I1221 19:46:53.882090   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key: {Name:mka668d20d09552540510629dda9e7183fc65f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.882193   14485 certs.go:257] generating profile certs ...
	I1221 19:46:53.882287   14485 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key
	I1221 19:46:53.882306   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt with IP's: []
	I1221 19:46:53.949191   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt ...
	I1221 19:46:53.949218   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: {Name:mk437ac45795a9ed2517fed6abf64052104e2d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.949415   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key ...
	I1221 19:46:53.949432   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.key: {Name:mk9c09a3d061b3b4e9b040df0f0accdecc9a4b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:53.949538   14485 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92
	I1221 19:46:53.949567   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1221 19:46:54.027546   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 ...
	I1221 19:46:54.027574   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92: {Name:mk65654c4f0d07db51693ce8d6fc85c1eb412fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.027773   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92 ...
	I1221 19:46:54.027789   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92: {Name:mk02396785b2aed0fb1b15f2b0e09b14f6971ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.027892   14485 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt.b70e8f92 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt
	I1221 19:46:54.028014   14485 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key.b70e8f92 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key
	I1221 19:46:54.028097   14485 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key
	I1221 19:46:54.028122   14485 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt with IP's: []
	I1221 19:46:54.112462   14485 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt ...
	I1221 19:46:54.112492   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt: {Name:mk86548759bdc0f34bd53e7dc810bdaf3f116117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.112677   14485 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key ...
	I1221 19:46:54.112700   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key: {Name:mk480430c5e33c65911aa0287b165fd3685694cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:46:54.112938   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 19:46:54.112986   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 19:46:54.113024   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 19:46:54.113052   14485 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 19:46:54.113695   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 19:46:54.131466   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 19:46:54.147643   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 19:46:54.163435   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 19:46:54.179184   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 19:46:54.195249   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 19:46:54.211184   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 19:46:54.227013   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 19:46:54.242711   14485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 19:46:54.260120   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 19:46:54.271582   14485 ssh_runner.go:195] Run: openssl version
	I1221 19:46:54.277207   14485 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.283919   14485 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 19:46:54.292635   14485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.295875   14485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.295918   14485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 19:46:54.329879   14485 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 19:46:54.337475   14485 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
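A short sketch cross-checking the CA hash symlink created above (the b5213941 value is the subject hash printed by the openssl step two lines earlier):

    # The symlink name must equal the CA's subject hash for OpenSSL-based clients to find it
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected: b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem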
	I1221 19:46:54.344331   14485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 19:46:54.347736   14485 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 19:46:54.347781   14485 kubeadm.go:401] StartCluster: {Name:addons-734405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-734405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:46:54.347861   14485 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:46:54.347900   14485 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:46:54.372488   14485 cri.go:96] found id: ""
	I1221 19:46:54.372545   14485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 19:46:54.379908   14485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 19:46:54.387709   14485 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 19:46:54.387756   14485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 19:46:54.394754   14485 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 19:46:54.394773   14485 kubeadm.go:158] found existing configuration files:
	
	I1221 19:46:54.394804   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 19:46:54.402155   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 19:46:54.402204   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 19:46:54.408760   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 19:46:54.415268   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 19:46:54.415312   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 19:46:54.421894   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 19:46:54.428420   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 19:46:54.428460   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 19:46:54.434952   14485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 19:46:54.441795   14485 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 19:46:54.441844   14485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 19:46:54.448618   14485 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 19:46:54.484709   14485 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 19:46:54.484816   14485 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 19:46:54.503707   14485 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 19:46:54.503798   14485 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 19:46:54.503850   14485 kubeadm.go:319] OS: Linux
	I1221 19:46:54.503921   14485 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 19:46:54.503966   14485 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 19:46:54.504006   14485 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 19:46:54.504045   14485 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 19:46:54.504085   14485 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 19:46:54.504150   14485 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 19:46:54.504248   14485 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 19:46:54.504325   14485 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 19:46:54.558098   14485 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 19:46:54.558260   14485 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 19:46:54.558374   14485 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 19:46:54.565166   14485 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 19:46:54.566964   14485 out.go:252]   - Generating certificates and keys ...
	I1221 19:46:54.567036   14485 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 19:46:54.567112   14485 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 19:46:54.644803   14485 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 19:46:55.164760   14485 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 19:46:55.577189   14485 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 19:46:55.876457   14485 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 19:46:56.144051   14485 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 19:46:56.144167   14485 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-734405 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 19:46:56.375892   14485 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 19:46:56.376031   14485 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-734405 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 19:46:56.541731   14485 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 19:46:56.827606   14485 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 19:46:56.934599   14485 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 19:46:56.934669   14485 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 19:46:57.225363   14485 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 19:46:57.448883   14485 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 19:46:57.548595   14485 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 19:46:57.692930   14485 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 19:46:58.130796   14485 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 19:46:58.131248   14485 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 19:46:58.134555   14485 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 19:46:58.135745   14485 out.go:252]   - Booting up control plane ...
	I1221 19:46:58.135826   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 19:46:58.135897   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 19:46:58.136611   14485 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 19:46:58.163581   14485 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 19:46:58.163696   14485 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 19:46:58.169678   14485 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 19:46:58.169887   14485 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 19:46:58.169948   14485 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 19:46:58.266281   14485 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 19:46:58.266445   14485 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 19:46:59.267856   14485 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001710647s
	I1221 19:46:59.270760   14485 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 19:46:59.270876   14485 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1221 19:46:59.270983   14485 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 19:46:59.271107   14485 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 19:47:00.849691   14485 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.575766665s
	I1221 19:47:00.868186   14485 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.597348843s
	I1221 19:47:02.772350   14485 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501521761s
	I1221 19:47:02.787497   14485 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 19:47:02.796654   14485 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 19:47:02.804306   14485 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 19:47:02.804479   14485 kubeadm.go:319] [mark-control-plane] Marking the node addons-734405 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 19:47:02.811401   14485 kubeadm.go:319] [bootstrap-token] Using token: ah16bj.w0eka582y48hwab4
	I1221 19:47:02.812668   14485 out.go:252]   - Configuring RBAC rules ...
	I1221 19:47:02.812816   14485 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 19:47:02.815239   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 19:47:02.819342   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 19:47:02.821359   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 19:47:02.824192   14485 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 19:47:02.826150   14485 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 19:47:03.177340   14485 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 19:47:03.591300   14485 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 19:47:04.176981   14485 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 19:47:04.177691   14485 kubeadm.go:319] 
	I1221 19:47:04.177785   14485 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 19:47:04.177804   14485 kubeadm.go:319] 
	I1221 19:47:04.177894   14485 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 19:47:04.177912   14485 kubeadm.go:319] 
	I1221 19:47:04.177950   14485 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 19:47:04.178035   14485 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 19:47:04.178078   14485 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 19:47:04.178103   14485 kubeadm.go:319] 
	I1221 19:47:04.178191   14485 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 19:47:04.178200   14485 kubeadm.go:319] 
	I1221 19:47:04.178295   14485 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 19:47:04.178304   14485 kubeadm.go:319] 
	I1221 19:47:04.178379   14485 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 19:47:04.178492   14485 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 19:47:04.178586   14485 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 19:47:04.178600   14485 kubeadm.go:319] 
	I1221 19:47:04.178702   14485 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 19:47:04.178801   14485 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 19:47:04.178813   14485 kubeadm.go:319] 
	I1221 19:47:04.178915   14485 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ah16bj.w0eka582y48hwab4 \
	I1221 19:47:04.179057   14485 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 19:47:04.179090   14485 kubeadm.go:319] 	--control-plane 
	I1221 19:47:04.179098   14485 kubeadm.go:319] 
	I1221 19:47:04.179244   14485 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 19:47:04.179263   14485 kubeadm.go:319] 
	I1221 19:47:04.179340   14485 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ah16bj.w0eka582y48hwab4 \
	I1221 19:47:04.179481   14485 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 19:47:04.180859   14485 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 19:47:04.180993   14485 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 19:47:04.181014   14485 cni.go:84] Creating CNI manager for ""
	I1221 19:47:04.181025   14485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 19:47:04.183169   14485 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1221 19:47:04.184248   14485 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 19:47:04.188324   14485 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1221 19:47:04.188341   14485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1221 19:47:04.200569   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 19:47:04.391219   14485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 19:47:04.391316   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.391386   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-734405 minikube.k8s.io/updated_at=2025_12_21T19_47_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=addons-734405 minikube.k8s.io/primary=true
	I1221 19:47:04.400300   14485 ops.go:34] apiserver oom_adj: -16
	I1221 19:47:04.455865   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:04.956122   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.456346   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:05.956565   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.456521   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:06.956013   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.456314   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:07.956529   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:08.456679   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:08.956418   14485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 19:47:09.025099   14485 kubeadm.go:1114] duration metric: took 4.633835152s to wait for elevateKubeSystemPrivileges
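	The stretch of log from 19:47:04.4 to 19:47:09.0 above shows the "elevateKubeSystemPrivileges" step: minikube polls "kubectl get sa default" roughly every 500ms until kubeadm has created the default service account, and in parallel creates the minikube-rbac cluster-admin binding for kube-system:default. A minimal Go sketch of that poll-then-bind pattern follows; the kubeconfig path and binding name are taken from the log, while the standalone program, helper names, and timeout are illustrative assumptions, not minikube's actual code.

	// poll_default_sa.go: sketch of the wait-for-default-service-account step
	// visible in the log, followed by the cluster-admin binding creation.
	// Illustrative only; minikube runs these kubectl calls over SSH.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func run(args ...string) error {
		return exec.Command("kubectl", args...).Run()
	}

	func main() {
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		// Poll every 500ms (as the log timestamps suggest) until the "default"
		// service account exists; kubeadm creates it shortly after init.
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			if err := run(kubeconfig, "get", "sa", "default"); err == nil {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Grant kube-system:default cluster-admin, mirroring the
		// "create clusterrolebinding minikube-rbac" call in the log.
		if err := run(kubeconfig, "create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
			fmt.Println("creating minikube-rbac binding failed:", err)
		}
	}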
	I1221 19:47:09.025137   14485 kubeadm.go:403] duration metric: took 14.677358336s to StartCluster
	I1221 19:47:09.025159   14485 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:09.025316   14485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:47:09.025691   14485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 19:47:09.025878   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 19:47:09.025913   14485 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 19:47:09.025980   14485 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1221 19:47:09.026100   14485 addons.go:70] Setting yakd=true in profile "addons-734405"
	I1221 19:47:09.026125   14485 addons.go:239] Setting addon yakd=true in "addons-734405"
	I1221 19:47:09.026141   14485 addons.go:70] Setting inspektor-gadget=true in profile "addons-734405"
	I1221 19:47:09.026160   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026166   14485 addons.go:239] Setting addon inspektor-gadget=true in "addons-734405"
	I1221 19:47:09.026185   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026199   14485 addons.go:70] Setting metrics-server=true in profile "addons-734405"
	I1221 19:47:09.026218   14485 addons.go:239] Setting addon metrics-server=true in "addons-734405"
	I1221 19:47:09.026216   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:09.026283   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026288   14485 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-734405"
	I1221 19:47:09.026305   14485 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-734405"
	I1221 19:47:09.026317   14485 addons.go:70] Setting default-storageclass=true in profile "addons-734405"
	I1221 19:47:09.026364   14485 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-734405"
	I1221 19:47:09.026625   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026708   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026720   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026735   14485 addons.go:70] Setting volcano=true in profile "addons-734405"
	I1221 19:47:09.026749   14485 addons.go:239] Setting addon volcano=true in "addons-734405"
	I1221 19:47:09.026770   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026790   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027054   14485 addons.go:70] Setting ingress=true in profile "addons-734405"
	I1221 19:47:09.027085   14485 addons.go:239] Setting addon ingress=true in "addons-734405"
	I1221 19:47:09.027092   14485 addons.go:70] Setting gcp-auth=true in profile "addons-734405"
	I1221 19:47:09.027115   14485 mustload.go:66] Loading cluster: addons-734405
	I1221 19:47:09.027120   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.027174   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027317   14485 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:47:09.027378   14485 addons.go:70] Setting storage-provisioner=true in profile "addons-734405"
	I1221 19:47:09.027395   14485 addons.go:239] Setting addon storage-provisioner=true in "addons-734405"
	I1221 19:47:09.027419   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.027545   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027580   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027710   14485 addons.go:70] Setting cloud-spanner=true in profile "addons-734405"
	I1221 19:47:09.027757   14485 addons.go:239] Setting addon cloud-spanner=true in "addons-734405"
	I1221 19:47:09.027795   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.026720   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027887   14485 addons.go:70] Setting ingress-dns=true in profile "addons-734405"
	I1221 19:47:09.028388   14485 addons.go:239] Setting addon ingress-dns=true in "addons-734405"
	I1221 19:47:09.028434   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028922   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.026177   14485 addons.go:70] Setting registry-creds=true in profile "addons-734405"
	I1221 19:47:09.029786   14485 addons.go:239] Setting addon registry-creds=true in "addons-734405"
	I1221 19:47:09.029815   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.030291   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.027958   14485 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-734405"
	I1221 19:47:09.030626   14485 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-734405"
	I1221 19:47:09.030654   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.033257   14485 out.go:179] * Verifying Kubernetes components...
	I1221 19:47:09.034515   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.034713   14485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 19:47:09.027972   14485 addons.go:70] Setting volumesnapshots=true in profile "addons-734405"
	I1221 19:47:09.034983   14485 addons.go:239] Setting addon volumesnapshots=true in "addons-734405"
	I1221 19:47:09.035014   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028071   14485 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-734405"
	I1221 19:47:09.035349   14485 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-734405"
	I1221 19:47:09.035382   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.028108   14485 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-734405"
	I1221 19:47:09.035426   14485 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-734405"
	I1221 19:47:09.035454   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.035515   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.028129   14485 addons.go:70] Setting registry=true in profile "addons-734405"
	I1221 19:47:09.036260   14485 addons.go:239] Setting addon registry=true in "addons-734405"
	I1221 19:47:09.036290   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.036747   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039561   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039596   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039811   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.039844   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.081833   14485 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1221 19:47:09.083247   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1221 19:47:09.083285   14485 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1221 19:47:09.083365   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.100008   14485 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-734405"
	I1221 19:47:09.101560   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.101486   14485 addons.go:239] Setting addon default-storageclass=true in "addons-734405"
	I1221 19:47:09.101893   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.102265   14485 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1221 19:47:09.105361   14485 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1221 19:47:09.106067   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.106165   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:09.106607   14485 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:09.106622   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1221 19:47:09.106666   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.109156   14485 out.go:179]   - Using image docker.io/registry:3.0.0
	I1221 19:47:09.109497   14485 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1221 19:47:09.113091   14485 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:09.113110   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1221 19:47:09.113162   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.113411   14485 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1221 19:47:09.113421   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1221 19:47:09.113463   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	W1221 19:47:09.127914   14485 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1221 19:47:09.130075   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1221 19:47:09.131512   14485 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1221 19:47:09.131660   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1221 19:47:09.131681   14485 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1221 19:47:09.131681   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:09.131751   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.133115   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1221 19:47:09.133134   14485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1221 19:47:09.133203   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.140927   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:09.143465   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1221 19:47:09.144575   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:09.145923   14485 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:09.145944   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1221 19:47:09.146008   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.151357   14485 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1221 19:47:09.151510   14485 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1221 19:47:09.151552   14485 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1221 19:47:09.152506   14485 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:09.152525   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1221 19:47:09.152582   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.151605   14485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 19:47:09.153321   14485 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:09.153344   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1221 19:47:09.153395   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.154362   14485 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:09.154382   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 19:47:09.154428   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.154363   14485 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:09.154460   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1221 19:47:09.154535   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.180412   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.181523   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.184899   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1221 19:47:09.185498   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.187054   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1221 19:47:09.187076   14485 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1221 19:47:09.188399   14485 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:09.188428   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1221 19:47:09.188489   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.190886   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1221 19:47:09.198405   14485 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:09.198426   14485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 19:47:09.198525   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.202381   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.205104   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1221 19:47:09.205703   14485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 19:47:09.206806   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.207467   14485 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1221 19:47:09.207520   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1221 19:47:09.208715   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1221 19:47:09.208733   14485 out.go:179]   - Using image docker.io/busybox:stable
	I1221 19:47:09.209926   14485 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:09.212280   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1221 19:47:09.212364   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.212244   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1221 19:47:09.214046   14485 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1221 19:47:09.215097   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.216902   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1221 19:47:09.216946   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1221 19:47:09.217020   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:09.218373   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.219351   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.222441   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.223581   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.235387   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.235660   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.257708   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.258412   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	W1221 19:47:09.260515   14485 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1221 19:47:09.260568   14485 retry.go:84] will retry after 200ms: ssh: handshake failed: EOF
	I1221 19:47:09.263847   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:09.282479   14485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 19:47:09.366852   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1221 19:47:09.367935   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1221 19:47:09.369449   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1221 19:47:09.369469   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1221 19:47:09.374358   14485 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1221 19:47:09.374376   14485 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1221 19:47:09.374543   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1221 19:47:09.374569   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1221 19:47:09.392267   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1221 19:47:09.392294   14485 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1221 19:47:09.395646   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 19:47:09.398314   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1221 19:47:09.398333   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1221 19:47:09.401486   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1221 19:47:09.403090   14485 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.403113   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1221 19:47:09.417422   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 19:47:09.419890   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 19:47:09.419900   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 19:47:09.423934   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1221 19:47:09.424204   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1221 19:47:09.424218   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1221 19:47:09.427439   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1221 19:47:09.427456   14485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1221 19:47:09.428273   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1221 19:47:09.428287   14485 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1221 19:47:09.429043   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 19:47:09.454500   14485 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1221 19:47:09.454587   14485 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1221 19:47:09.461202   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1221 19:47:09.472327   14485 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.472407   14485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1221 19:47:09.479159   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1221 19:47:09.479182   14485 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1221 19:47:09.488790   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1221 19:47:09.488813   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1221 19:47:09.507239   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1221 19:47:09.507265   14485 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1221 19:47:09.527055   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 19:47:09.541991   14485 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:09.542086   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1221 19:47:09.541991   14485 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:09.542202   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1221 19:47:09.545522   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1221 19:47:09.545597   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1221 19:47:09.576525   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 19:47:09.580323   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1221 19:47:09.588699   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1221 19:47:09.588730   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1221 19:47:09.645444   14485 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
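	The sed pipeline logged at 19:47:09.205 edits the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP (192.168.49.1), inserting a hosts{} stanza with fallthrough just before the forward plugin; the line above confirms the host record was injected into CoreDNS's ConfigMap. The Go sketch below reproduces just that host-record half of the transformation (the pipeline also adds a log directive before errors); the sample Corefile and function name are illustrative assumptions.

	// corefile_inject.go: sketch of the Corefile edit performed by the sed
	// pipeline in the log -- insert a hosts{} block mapping
	// host.minikube.internal to the host gateway before the forward directive.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, ip string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			// Insert the hosts block immediately before CoreDNS's forward plugin,
			// matching the sed expression shown in the log.
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		// Assumed minimal Corefile for illustration.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}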
	I1221 19:47:09.648969   14485 node_ready.go:35] waiting up to 6m0s for node "addons-734405" to be "Ready" ...
	I1221 19:47:09.656402   14485 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1221 19:47:09.656427   14485 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1221 19:47:09.662957   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 19:47:09.762624   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1221 19:47:09.762708   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1221 19:47:09.790761   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1221 19:47:09.790792   14485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1221 19:47:09.884200   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1221 19:47:09.884335   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1221 19:47:09.923310   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1221 19:47:09.923332   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1221 19:47:09.985482   14485 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:09.985511   14485 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1221 19:47:10.048613   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 19:47:10.163092   14485 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-734405" context rescaled to 1 replicas
	I1221 19:47:10.499832   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.132918101s)
	I1221 19:47:10.499942   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.131984382s)
	I1221 19:47:10.500011   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.104341051s)
	I1221 19:47:10.500053   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.09854521s)
	I1221 19:47:10.500097   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.082653762s)
	I1221 19:47:10.500174   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08025754s)
	I1221 19:47:10.757009   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.33708412s)
	I1221 19:47:10.757049   14485 addons.go:495] Verifying addon ingress=true in "addons-734405"
	I1221 19:47:10.757087   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.333122076s)
	I1221 19:47:10.757154   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.328066813s)
	I1221 19:47:10.757285   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.296049769s)
	I1221 19:47:10.757315   14485 addons.go:495] Verifying addon registry=true in "addons-734405"
	I1221 19:47:10.757350   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.230263728s)
	I1221 19:47:10.757374   14485 addons.go:495] Verifying addon metrics-server=true in "addons-734405"
	I1221 19:47:10.768035   14485 out.go:179] * Verifying ingress addon...
	I1221 19:47:10.768304   14485 out.go:179] * Verifying registry addon...
	I1221 19:47:10.771095   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1221 19:47:10.771111   14485 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1221 19:47:10.774128   14485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 19:47:10.774146   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:10.774608   14485 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1221 19:47:10.774627   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:11.183158   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.606547368s)
	I1221 19:47:11.183235   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.602865426s)
	W1221 19:47:11.183259   14485 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 19:47:11.183273   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.520294375s)
	I1221 19:47:11.183536   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.134882072s)
	I1221 19:47:11.183569   14485 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-734405"
	I1221 19:47:11.185339   14485 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-734405 service yakd-dashboard -n yakd-dashboard
	
	I1221 19:47:11.185346   14485 out.go:179] * Verifying csi-hostpath-driver addon...
	I1221 19:47:11.188012   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1221 19:47:11.190652   14485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 19:47:11.190671   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:11.291378   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:11.291539   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:11.430939   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1221 19:47:11.656708   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:11.691053   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:11.791866   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:11.791982   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:12.190817   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:12.292019   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:12.292213   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:12.690863   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:12.791980   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:12.792137   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.190532   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:13.273658   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:13.273825   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.691147   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:13.791798   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:13.791956   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:13.893214   14485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.462219685s)
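	The failure at 19:47:11.183 above is a CRD ordering race: the snapshot CRDs and the VolumeSnapshotClass object were sent in one kubectl apply, so the custom resource had no registered kind yet ("no matches for kind ... ensure CRDs are installed first"). The log shows minikube retrying the same file list with "kubectl apply --force" at 19:47:11.430, and that pass completing at 19:47:13.893 once the CRDs were established. A hedged Go sketch of that apply-then-retry pattern follows; the file list is a shortened subset of the paths in the log, and the fixed 2-second delay and helper names are illustrative assumptions rather than minikube's exact retry policy.

	// apply_retry.go: sketch of retrying a kubectl apply that bundles CRDs
	// with custom resources, as seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apply(force bool, files []string) error {
		args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
		if force {
			args = append(args, "--force")
		}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Subset of the manifests listed in the log.
		files := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml",
		}
		if err := apply(false, files); err != nil {
			// First pass can race CRD registration; wait briefly and retry,
			// as the log shows minikube doing with --force.
			fmt.Println("initial apply failed, retrying:", err)
			time.Sleep(2 * time.Second)
			if err := apply(true, files); err != nil {
				fmt.Println("retry failed:", err)
			}
		}
	}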
	W1221 19:47:14.150895   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:14.191453   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:14.274010   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:14.274313   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:14.691010   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:14.791334   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:14.791529   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:15.191323   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:15.273508   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:15.273593   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:15.690760   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:15.791346   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:15.791519   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1221 19:47:16.151796   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:16.191068   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:16.274236   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.274357   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.690888   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:16.745755   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1221 19:47:16.745829   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:16.763211   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:16.774357   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:16.774382   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:16.864436   14485 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1221 19:47:16.876236   14485 addons.go:239] Setting addon gcp-auth=true in "addons-734405"
	I1221 19:47:16.876279   14485 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:47:16.876615   14485 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:47:16.893629   14485 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1221 19:47:16.893708   14485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:47:16.911071   14485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:47:17.004249   14485 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1221 19:47:17.005338   14485 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1221 19:47:17.006332   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1221 19:47:17.006344   14485 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1221 19:47:17.018485   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1221 19:47:17.018505   14485 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1221 19:47:17.030049   14485 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:17.030069   14485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1221 19:47:17.041455   14485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 19:47:17.191274   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:17.275208   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.275450   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.322944   14485 addons.go:495] Verifying addon gcp-auth=true in "addons-734405"
	I1221 19:47:17.324242   14485 out.go:179] * Verifying gcp-auth addon...
	I1221 19:47:17.325926   14485 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1221 19:47:17.375521   14485 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1221 19:47:17.375543   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:17.691139   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:17.774529   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:17.774591   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:17.829199   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:18.190775   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:18.274051   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.274191   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.329047   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1221 19:47:18.651524   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:18.690937   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:18.774315   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:18.774416   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:18.828394   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:19.198580   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.273662   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.273787   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.328981   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:19.690637   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:19.774017   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:19.774154   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:19.827959   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:20.190791   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.274289   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.274452   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.328683   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1221 19:47:20.652213   14485 node_ready.go:57] node "addons-734405" has "Ready":"False" status (will retry)
	I1221 19:47:20.690544   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:20.773727   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:20.773890   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:20.828761   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.190169   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.273358   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.273540   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.328802   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:21.690607   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:21.773739   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:21.773895   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:21.832814   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.152935   14485 node_ready.go:49] node "addons-734405" is "Ready"
	I1221 19:47:22.152974   14485 node_ready.go:38] duration metric: took 12.503966942s for node "addons-734405" to be "Ready" ...
	I1221 19:47:22.152992   14485 api_server.go:52] waiting for apiserver process to appear ...
	I1221 19:47:22.153048   14485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 19:47:22.172049   14485 api_server.go:72] duration metric: took 13.146096968s to wait for apiserver process to appear ...
	I1221 19:47:22.172080   14485 api_server.go:88] waiting for apiserver healthz status ...
	I1221 19:47:22.172103   14485 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 19:47:22.178258   14485 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1221 19:47:22.179266   14485 api_server.go:141] control plane version: v1.34.3
	I1221 19:47:22.179296   14485 api_server.go:131] duration metric: took 7.210074ms to wait for apiserver health ...
	I1221 19:47:22.179306   14485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 19:47:22.185672   14485 system_pods.go:59] 20 kube-system pods found
	I1221 19:47:22.185712   14485 system_pods.go:61] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.185724   14485 system_pods.go:61] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.185735   14485 system_pods.go:61] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.185744   14485 system_pods.go:61] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.185754   14485 system_pods.go:61] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.185762   14485 system_pods.go:61] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.185768   14485 system_pods.go:61] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.185774   14485 system_pods.go:61] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.185780   14485 system_pods.go:61] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.185800   14485 system_pods.go:61] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.185814   14485 system_pods.go:61] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.185819   14485 system_pods.go:61] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.185826   14485 system_pods.go:61] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.185841   14485 system_pods.go:61] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.185848   14485 system_pods.go:61] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.185905   14485 system_pods.go:61] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.185917   14485 system_pods.go:61] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.185928   14485 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.185936   14485 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.185943   14485 system_pods.go:61] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.185951   14485 system_pods.go:74] duration metric: took 6.638287ms to wait for pod list to return data ...
	I1221 19:47:22.185960   14485 default_sa.go:34] waiting for default service account to be created ...
	I1221 19:47:22.188683   14485 default_sa.go:45] found service account: "default"
	I1221 19:47:22.188707   14485 default_sa.go:55] duration metric: took 2.740424ms for default service account to be created ...
	I1221 19:47:22.188717   14485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 19:47:22.283707   14485 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 19:47:22.283734   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.283869   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.283896   14485 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 19:47:22.283910   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.285986   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.286018   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.286028   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.286057   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.286062   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.286069   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.286077   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.286083   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.286087   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.286095   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.286104   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.286128   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.286135   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.286140   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.286146   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.286160   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.286165   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.286172   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.286177   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.286185   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.286201   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.286235   14485 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1221 19:47:22.383618   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:22.486472   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.486565   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.486579   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 19:47:22.486591   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.486599   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.486607   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.486628   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.486637   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.486643   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.486648   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.486655   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.486660   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.486665   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.486680   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.486688   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.486695   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.486711   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.486718   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.486726   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.486733   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.486741   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 19:47:22.692687   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:22.765459   14485 system_pods.go:86] 20 kube-system pods found
	I1221 19:47:22.765506   14485 system_pods.go:89] "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1221 19:47:22.765515   14485 system_pods.go:89] "coredns-66bc5c9577-wq5c4" [0d603bcf-6860-49dd-a4e0-6e29d057bd3b] Running
	I1221 19:47:22.765526   14485 system_pods.go:89] "csi-hostpath-attacher-0" [92df6883-ffee-4ab9-8ad0-896da35173b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1221 19:47:22.765539   14485 system_pods.go:89] "csi-hostpath-resizer-0" [becb6b7d-5f8b-4406-9344-98ec8add7989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1221 19:47:22.765553   14485 system_pods.go:89] "csi-hostpathplugin-9tblq" [d51177a8-f616-49ca-9d97-5f0337e4efbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1221 19:47:22.765562   14485 system_pods.go:89] "etcd-addons-734405" [0136be8d-f83b-4a34-87bd-b8a4e071aaa9] Running
	I1221 19:47:22.765568   14485 system_pods.go:89] "kindnet-z9kv6" [fd1416f9-d2c1-474c-8655-9e36238e04a8] Running
	I1221 19:47:22.765576   14485 system_pods.go:89] "kube-apiserver-addons-734405" [af68280f-387d-4148-978b-47ff4889e621] Running
	I1221 19:47:22.765582   14485 system_pods.go:89] "kube-controller-manager-addons-734405" [c3cad378-71d9-4b03-8cad-0be7bfc855cc] Running
	I1221 19:47:22.765594   14485 system_pods.go:89] "kube-ingress-dns-minikube" [7a09385d-10d0-4077-b59d-11a7c22481eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1221 19:47:22.765604   14485 system_pods.go:89] "kube-proxy-w42q9" [e18c35e5-f56c-4193-881a-7f2c558aa963] Running
	I1221 19:47:22.765610   14485 system_pods.go:89] "kube-scheduler-addons-734405" [95470548-a252-4c1d-9359-e8f08da8f53a] Running
	I1221 19:47:22.765622   14485 system_pods.go:89] "metrics-server-85b7d694d7-gzztd" [6bb93449-d194-4309-ba2f-972b275b8b34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1221 19:47:22.765630   14485 system_pods.go:89] "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1221 19:47:22.765642   14485 system_pods.go:89] "registry-6b586f9694-5p6mn" [cf862c70-5d5a-40f3-8e11-59ffaa2aad95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1221 19:47:22.765656   14485 system_pods.go:89] "registry-creds-764b6fb674-8smmr" [45150a37-5dac-4f62-a0c4-4044a717c870] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1221 19:47:22.765668   14485 system_pods.go:89] "registry-proxy-5xdvv" [5a7db08e-cdae-489d-a002-680422c11f70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1221 19:47:22.765677   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fn24t" [913e525f-d3c7-4179-a14c-9c531ece62a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.765685   14485 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w6gfv" [3e6783df-3cda-44c5-8701-b7c55a99095a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1221 19:47:22.765691   14485 system_pods.go:89] "storage-provisioner" [862f1bb2-81ec-4655-944e-76f7b57ea0fc] Running
	I1221 19:47:22.765701   14485 system_pods.go:126] duration metric: took 576.976739ms to wait for k8s-apps to be running ...
	I1221 19:47:22.765710   14485 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 19:47:22.765764   14485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 19:47:22.774949   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:22.775153   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:22.783001   14485 system_svc.go:56] duration metric: took 17.284969ms WaitForService to wait for kubelet
	I1221 19:47:22.783026   14485 kubeadm.go:587] duration metric: took 13.75707992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 19:47:22.783049   14485 node_conditions.go:102] verifying NodePressure condition ...
	I1221 19:47:22.785596   14485 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 19:47:22.785623   14485 node_conditions.go:123] node cpu capacity is 8
	I1221 19:47:22.785644   14485 node_conditions.go:105] duration metric: took 2.589014ms to run NodePressure ...
	I1221 19:47:22.785657   14485 start.go:242] waiting for startup goroutines ...
	I1221 19:47:22.829399   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.192018   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.274621   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.274772   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.329512   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:23.692074   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:23.774435   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:23.774536   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:23.828363   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.191629   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.291960   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.292072   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.329812   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:24.690784   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:24.774688   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:24.774710   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:24.828965   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.191291   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.292415   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.292450   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.328448   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:25.692880   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:25.774670   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:25.774682   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:25.829123   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.191155   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.275375   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.275489   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.329025   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:26.691098   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:26.775018   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:26.775058   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:26.829667   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.192073   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.274637   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.274848   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.329610   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:27.691625   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:27.774104   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:27.774116   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:27.828933   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.191287   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.275279   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.275332   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.328723   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:28.691899   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:28.774591   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:28.774834   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:28.829313   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.191452   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.291413   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.291550   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.391645   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:29.695824   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:29.774576   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:29.774603   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:29.828575   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.192219   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.276331   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.276530   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.330172   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:30.693305   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:30.776600   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:30.777083   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:30.829545   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.191982   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.274946   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:31.275344   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.329318   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.739158   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:31.876011   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:31.876433   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:31.876974   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.191301   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.291895   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.291930   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.392955   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:32.691585   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:32.774394   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:32.774426   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:32.829253   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.191301   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.273984   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.274130   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.329277   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:33.691740   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:33.774642   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:33.774663   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:33.829105   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.191321   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.291847   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.291884   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.330631   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:34.692201   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:34.775308   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:34.775475   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:34.829093   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.191670   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.291992   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.292122   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.329367   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:35.691707   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:35.774142   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:35.774196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:35.828051   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.191286   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.274821   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.274963   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.329591   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:36.693964   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:36.775880   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:36.776381   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:36.829537   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.191980   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.274738   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.274838   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.329913   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:37.692674   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:37.793062   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:37.793126   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:37.829764   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:38.192522   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:38.274202   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:38.274282   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:38.329509   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:38.692078   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:38.792751   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:38.793018   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:38.828939   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.191439   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.275306   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.277167   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.328964   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:39.691148   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:39.775085   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:39.775176   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:39.828981   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.191520   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.274332   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.274611   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.329287   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:40.691658   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:40.774007   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:40.774216   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:40.829215   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.191251   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.274443   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.274475   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.329107   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:41.691443   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:41.791437   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:41.791533   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:41.828458   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.193080   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.274251   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.274394   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.328599   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:42.691909   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:42.774761   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:42.775196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:42.828994   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.191330   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.273801   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.273887   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:43.329988   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:43.691311   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:43.774089   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:43.774126   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:43.829686   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.192218   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.274538   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:44.274672   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.329278   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:44.691352   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:44.774214   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:44.774278   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:44.829771   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.191917   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.333160   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:45.333167   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.485984   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:45.756190   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:45.774393   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:45.774410   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:45.828995   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.191045   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.274412   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 19:47:46.274585   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.329175   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:46.692763   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:46.774348   14485 kapi.go:107] duration metric: took 36.003250091s to wait for kubernetes.io/minikube-addons=registry ...
	I1221 19:47:46.774982   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:46.829433   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.191921   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.274762   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.329211   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:47.691736   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:47.774304   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:47.828904   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.190773   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.274110   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.373191   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:48.690939   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:48.775333   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:48.830073   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.191261   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.273789   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.329651   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:49.691709   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:49.773972   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:49.829552   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.192270   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.292367   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.328793   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 19:47:50.693046   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:50.793698   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:50.828969   14485 kapi.go:107] duration metric: took 33.503038969s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1221 19:47:50.830434   14485 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-734405 cluster.
	I1221 19:47:50.831497   14485 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1221 19:47:50.832585   14485 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1221 19:47:51.192059   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.274628   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:51.692528   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:51.774306   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:52.191516   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:52.273977   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:52.691724   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:52.791858   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.191772   14485 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 19:47:53.274371   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:53.690848   14485 kapi.go:107] duration metric: took 42.502832919s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1221 19:47:53.774178   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.275460   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:54.775892   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.274655   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:55.801196   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.274663   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:56.775134   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.274879   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:57.773904   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.274857   14485 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 19:47:58.774000   14485 kapi.go:107] duration metric: took 48.002886953s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1221 19:47:58.775398   14485 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1221 19:47:58.776573   14485 addons.go:530] duration metric: took 49.750592938s for enable addons: enabled=[inspektor-gadget registry-creds ingress-dns amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1221 19:47:58.776619   14485 start.go:247] waiting for cluster config update ...
	I1221 19:47:58.776643   14485 start.go:256] writing updated cluster config ...
	I1221 19:47:58.776951   14485 ssh_runner.go:195] Run: rm -f paused
	I1221 19:47:58.780731   14485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:47:58.783327   14485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wq5c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.786850   14485 pod_ready.go:94] pod "coredns-66bc5c9577-wq5c4" is "Ready"
	I1221 19:47:58.786871   14485 pod_ready.go:86] duration metric: took 3.525494ms for pod "coredns-66bc5c9577-wq5c4" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.788511   14485 pod_ready.go:83] waiting for pod "etcd-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.791747   14485 pod_ready.go:94] pod "etcd-addons-734405" is "Ready"
	I1221 19:47:58.791764   14485 pod_ready.go:86] duration metric: took 3.233235ms for pod "etcd-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.793339   14485 pod_ready.go:83] waiting for pod "kube-apiserver-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.796295   14485 pod_ready.go:94] pod "kube-apiserver-addons-734405" is "Ready"
	I1221 19:47:58.796315   14485 pod_ready.go:86] duration metric: took 2.956894ms for pod "kube-apiserver-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:58.797789   14485 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.185268   14485 pod_ready.go:94] pod "kube-controller-manager-addons-734405" is "Ready"
	I1221 19:47:59.185302   14485 pod_ready.go:86] duration metric: took 387.49367ms for pod "kube-controller-manager-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.383895   14485 pod_ready.go:83] waiting for pod "kube-proxy-w42q9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.784456   14485 pod_ready.go:94] pod "kube-proxy-w42q9" is "Ready"
	I1221 19:47:59.784482   14485 pod_ready.go:86] duration metric: took 400.557638ms for pod "kube-proxy-w42q9" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:47:59.984252   14485 pod_ready.go:83] waiting for pod "kube-scheduler-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:00.384141   14485 pod_ready.go:94] pod "kube-scheduler-addons-734405" is "Ready"
	I1221 19:48:00.384174   14485 pod_ready.go:86] duration metric: took 399.891025ms for pod "kube-scheduler-addons-734405" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 19:48:00.384189   14485 pod_ready.go:40] duration metric: took 1.603427829s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 19:48:00.427073   14485 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 19:48:00.428707   14485 out.go:179] * Done! kubectl is now configured to use "addons-734405" cluster and "default" namespace by default
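	The gcp-auth messages in the log above point at two follow-up actions: labeling a pod so the webhook skips credential mounting, and re-running the addon so pods created earlier pick credentials up. A minimal sketch of both, assuming the standard kubectl and minikube CLIs and the profile used in this run; the label key comes from the message itself, while the label value, pod name, and image choice are illustrative assumptions, not part of the test run:

		# Create a pod the gcp-auth webhook should skip. The label key is taken from the
		# log message above; the "true" value and the pod/image names are assumptions.
		kubectl run skip-gcp-auth-demo \
		  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
		  --labels=gcp-auth-skip-secret=true \
		  -- sleep 3600

		# Re-run the addon with --refresh (as the message suggests) so pods created
		# before gcp-auth was enabled get credentials mounted on recreation.
		minikube -p addons-734405 addons enable gcp-auth --refresh

	Since the webhook mutates pods at admission time, the label only affects pods created after it is set, which is why the message recommends recreating existing pods or re-running addons enable with --refresh.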
	
	
	==> CRI-O <==
	Dec 21 19:47:57 addons-734405 crio[774]: time="2025-12-21T19:47:57.796029218Z" level=info msg="Starting container: d1cc7252170adbdf6fc0c3d572b28ac7d17b455a2210c3bee455050aa96788b8" id=bff1b3d2-8be2-405b-a2b8-a0e4451c259c name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 19:47:57 addons-734405 crio[774]: time="2025-12-21T19:47:57.797733953Z" level=info msg="Started container" PID=6030 containerID=d1cc7252170adbdf6fc0c3d572b28ac7d17b455a2210c3bee455050aa96788b8 description=ingress-nginx/ingress-nginx-controller-85d4c799dd-dmwnv/controller id=bff1b3d2-8be2-405b-a2b8-a0e4451c259c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d239d101dcfd8769d758c3c2273c1aec83e1213267e18203eac23978cfd879ad
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.235443051Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3319769f-0c66-4422-ac2f-c0aabf357a73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.235500889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.240963148Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:239d5ab16a4a7d02fc5119a60b1d15c184e4ac1431a36373f07ff9f58b2dab69 UID:a5a9677e-ccdd-4fb3-ad46-086786f62164 NetNS:/var/run/netns/82071af3-69e9-462a-b878-0f42960a70b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540920}] Aliases:map[]}"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.240989022Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.250266343Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:239d5ab16a4a7d02fc5119a60b1d15c184e4ac1431a36373f07ff9f58b2dab69 UID:a5a9677e-ccdd-4fb3-ad46-086786f62164 NetNS:/var/run/netns/82071af3-69e9-462a-b878-0f42960a70b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000540920}] Aliases:map[]}"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.250387288Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.251074305Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.251801182Z" level=info msg="Ran pod sandbox 239d5ab16a4a7d02fc5119a60b1d15c184e4ac1431a36373f07ff9f58b2dab69 with infra container: default/busybox/POD" id=3319769f-0c66-4422-ac2f-c0aabf357a73 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.25288727Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd854e23-45da-41eb-ae44-52e538dd3111 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.253008222Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dd854e23-45da-41eb-ae44-52e538dd3111 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.253046667Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dd854e23-45da-41eb-ae44-52e538dd3111 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.253638723Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9610b6ac-dee1-4bb5-896f-46c5368f849a name=/runtime.v1.ImageService/PullImage
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.255008127Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.825942391Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9610b6ac-dee1-4bb5-896f-46c5368f849a name=/runtime.v1.ImageService/PullImage
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.826526971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d39f0c76-31d7-488f-a4e1-6717f603951d name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.827922243Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=83377581-66e0-4b64-8742-f803d8c8c521 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.831149436Z" level=info msg="Creating container: default/busybox/busybox" id=2711a06b-f6d6-4ac7-bf0c-7cfa0501abf6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.831301878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.836317799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.836756268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.866561568Z" level=info msg="Created container 78e218eaf8072a01be4625e70b9ee9831607acde1d3956e01f1a08acba06ac78: default/busybox/busybox" id=2711a06b-f6d6-4ac7-bf0c-7cfa0501abf6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.867059821Z" level=info msg="Starting container: 78e218eaf8072a01be4625e70b9ee9831607acde1d3956e01f1a08acba06ac78" id=7f01c75c-c4a6-4a3e-8530-9e2cd38341f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 19:48:01 addons-734405 crio[774]: time="2025-12-21T19:48:01.868738786Z" level=info msg="Started container" PID=6412 containerID=78e218eaf8072a01be4625e70b9ee9831607acde1d3956e01f1a08acba06ac78 description=default/busybox/busybox id=7f01c75c-c4a6-4a3e-8530-9e2cd38341f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=239d5ab16a4a7d02fc5119a60b1d15c184e4ac1431a36373f07ff9f58b2dab69
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	78e218eaf8072       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   239d5ab16a4a7       busybox                                     default
	d1cc7252170ad       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             12 seconds ago       Running             controller                               0                   d239d101dcfd8       ingress-nginx-controller-85d4c799dd-dmwnv   ingress-nginx
	8193c5ae3e9a0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          17 seconds ago       Running             csi-snapshotter                          0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	676b24cbeac1b       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago       Running             csi-provisioner                          0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	ae4f670583b4b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago       Running             liveness-probe                           0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	4da1a1c1615a1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           19 seconds ago       Running             hostpath                                 0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	83cd8b34dd2bc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	cc1211cf07843       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 21 seconds ago       Running             gcp-auth                                 0                   8ef659878b618       gcp-auth-78565c9fb4-f5n74                   gcp-auth
	931b6bedd64cc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            22 seconds ago       Running             gadget                                   0                   69137a0adeada       gadget-lvc5c                                gadget
	9154e33c67350       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   9a93df1c76584       registry-proxy-5xdvv                        kube-system
	ec5953c7bde6a       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              26 seconds ago       Running             yakd                                     0                   3ba81922d56f6       yakd-dashboard-6654c87f9b-lz7ml             yakd-dashboard
	091d53cfa2f7b       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             27 seconds ago       Exited              patch                                    2                   a85101bc1cf84       ingress-nginx-admission-patch-gp4pn         ingress-nginx
	2cb7999baafc4       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             28 seconds ago       Exited              patch                                    2                   3bdea63eea4ba       gcp-auth-certs-patch-lrb4r                  gcp-auth
	fc4218afd9e59       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      28 seconds ago       Running             volume-snapshot-controller               0                   ff7dc378427a8       snapshot-controller-7d9fbc56b8-w6gfv        kube-system
	2c76399e64dc0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   29 seconds ago       Exited              create                                   0                   267ef654d5296       ingress-nginx-admission-create-r2l6g        ingress-nginx
	d37800c5570f8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   26e3f02e5e27d       amd-gpu-device-plugin-s628b                 kube-system
	5acc717deb7f9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   b9eb8b9eca650       snapshot-controller-7d9fbc56b8-fn24t        kube-system
	737d21aac5c57       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago       Running             csi-attacher                             0                   52dc869c667e0       csi-hostpath-attacher-0                     kube-system
	749cd4daccd50       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   31 seconds ago       Running             csi-external-health-monitor-controller   0                   6cba746447ac3       csi-hostpathplugin-9tblq                    kube-system
	c99f35ca87dcf       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     32 seconds ago       Running             nvidia-device-plugin-ctr                 0                   9355ac06d6fab       nvidia-device-plugin-daemonset-jlq7q        kube-system
	fb8a056607469       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   35 seconds ago       Exited              create                                   0                   8bdf03fd2317d       gcp-auth-certs-create-wcjkh                 gcp-auth
	33aa662cb1f0b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              35 seconds ago       Running             csi-resizer                              0                   0c1a8c749af66       csi-hostpath-resizer-0                      kube-system
	d7348a5e060fd       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               36 seconds ago       Running             minikube-ingress-dns                     0                   93449b6dc50bc       kube-ingress-dns-minikube                   kube-system
	5afbb455983c1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   3929024e9b89b       local-path-provisioner-648f6765c9-csc7x     local-path-storage
	8a26f7364135b       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               43 seconds ago       Running             cloud-spanner-emulator                   0                   1d5f3fc2299bb       cloud-spanner-emulator-85df47b6f4-ltblw     default
	abf23e714a098       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           45 seconds ago       Running             registry                                 0                   7a298569ec09f       registry-6b586f9694-5p6mn                   kube-system
	54e47bcdd2cec       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        47 seconds ago       Running             metrics-server                           0                   7d0f5f70d0808       metrics-server-85b7d694d7-gzztd             kube-system
	d6093c1a7f9f6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   8dfe3a2ba5b1a       coredns-66bc5c9577-wq5c4                    kube-system
	23a6a681dd961       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   207b293bc67d7       storage-provisioner                         kube-system
	e631c821d8606       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           59 seconds ago       Running             kindnet-cni                              0                   6541b47b9f4b3       kindnet-z9kv6                               kube-system
	026bbd1e79a4d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago   Running             kube-proxy                               0                   7008e56743e7d       kube-proxy-w42q9                            kube-system
	e8e92c3f6bb0c       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago   Running             kube-scheduler                           0                   a9f4091a0c657       kube-scheduler-addons-734405                kube-system
	8989e50092359       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago   Running             kube-controller-manager                  0                   0e0d1e454a339       kube-controller-manager-addons-734405       kube-system
	5cbca605ea4a5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago   Running             kube-apiserver                           0                   3b53eb1af2ac4       kube-apiserver-addons-734405                kube-system
	a790cf4635e7c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   c8ef5c03af77c       etcd-addons-734405                          kube-system
	
	
	==> coredns [d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2] <==
	[INFO] 10.244.0.11:35445 - 16746 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128494s
	[INFO] 10.244.0.11:56025 - 61124 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106941s
	[INFO] 10.244.0.11:56025 - 61380 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135115s
	[INFO] 10.244.0.11:46209 - 52351 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000090309s
	[INFO] 10.244.0.11:46209 - 52139 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000132303s
	[INFO] 10.244.0.11:44461 - 53364 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000067834s
	[INFO] 10.244.0.11:44461 - 53088 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000106952s
	[INFO] 10.244.0.11:43127 - 23395 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000055084s
	[INFO] 10.244.0.11:43127 - 23194 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000056855s
	[INFO] 10.244.0.11:46475 - 32239 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114547s
	[INFO] 10.244.0.11:46475 - 31834 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015182s
	[INFO] 10.244.0.21:56646 - 9441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215087s
	[INFO] 10.244.0.21:51383 - 8343 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000268447s
	[INFO] 10.244.0.21:35532 - 26724 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156522s
	[INFO] 10.244.0.21:56609 - 51068 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148392s
	[INFO] 10.244.0.21:59894 - 19248 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117858s
	[INFO] 10.244.0.21:54500 - 22055 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101893s
	[INFO] 10.244.0.21:48975 - 12677 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004871059s
	[INFO] 10.244.0.21:60240 - 37351 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005335674s
	[INFO] 10.244.0.21:56562 - 43127 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005155105s
	[INFO] 10.244.0.21:55771 - 16484 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005293981s
	[INFO] 10.244.0.21:57930 - 7089 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004541723s
	[INFO] 10.244.0.21:34421 - 50285 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004697359s
	[INFO] 10.244.0.21:40060 - 58720 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000797793s
	[INFO] 10.244.0.21:60413 - 33616 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001150479s
	
	
	==> describe nodes <==
	Name:               addons-734405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-734405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=addons-734405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T19_47_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-734405
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-734405"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 19:47:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-734405
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 19:48:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 19:48:04 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 19:48:04 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 19:48:04 +0000   Sun, 21 Dec 2025 19:46:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 19:48:04 +0000   Sun, 21 Dec 2025 19:47:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-734405
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                74e3dc80-d0bb-45e6-9642-dc97dff8bb7b
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-85df47b6f4-ltblw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gadget                      gadget-lvc5c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gcp-auth                    gcp-auth-78565c9fb4-f5n74                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-dmwnv    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         60s
	  kube-system                 amd-gpu-device-plugin-s628b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-wq5c4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     61s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 csi-hostpathplugin-9tblq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-734405                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         68s
	  kube-system                 kindnet-z9kv6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-addons-734405                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-addons-734405        200m (2%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-w42q9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-addons-734405                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 metrics-server-85b7d694d7-gzztd              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         60s
	  kube-system                 nvidia-device-plugin-daemonset-jlq7q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-5p6mn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-creds-764b6fb674-8smmr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-proxy-5xdvv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-fn24t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-7d9fbc56b8-w6gfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  local-path-storage          local-path-provisioner-648f6765c9-csc7x      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-lz7ml              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 61s   kube-proxy       
	  Normal  Starting                 67s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s   kubelet          Node addons-734405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s   kubelet          Node addons-734405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s   kubelet          Node addons-734405 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           63s   node-controller  Node addons-734405 event: Registered Node addons-734405 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-734405 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec21 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.079009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.372710] i8042: Warning: Keylock active
	[  +0.010874] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476893] block sda: the capability attribute has been deprecated.
	[  +0.085350] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025061] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.894686] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8] <==
	{"level":"info","ts":"2025-12-21T19:47:34.031399Z","caller":"traceutil/trace.go:172","msg":"trace[1645263604] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"124.795067ms","start":"2025-12-21T19:47:33.906587Z","end":"2025-12-21T19:47:34.031382Z","steps":["trace[1645263604] 'process raft request'  (duration: 124.666179ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:34.051726Z","caller":"traceutil/trace.go:172","msg":"trace[1074436700] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"139.361329ms","start":"2025-12-21T19:47:33.912353Z","end":"2025-12-21T19:47:34.051714Z","steps":["trace[1074436700] 'process raft request'  (duration: 139.28643ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T19:47:37.876386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.883766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.895436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:37.902048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T19:47:45.484399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.60286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.484432Z","caller":"traceutil/trace.go:172","msg":"trace[58265122] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"221.616683ms","start":"2025-12-21T19:47:45.262797Z","end":"2025-12-21T19:47:45.484414Z","steps":["trace[58265122] 'process raft request'  (duration: 158.815668ms)","trace[58265122] 'compare'  (duration: 62.706435ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:47:45.484468Z","caller":"traceutil/trace.go:172","msg":"trace[2021833767] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1104; }","duration":"156.689768ms","start":"2025-12-21T19:47:45.327765Z","end":"2025-12-21T19:47:45.484454Z","steps":["trace[2021833767] 'agreement among raft nodes before linearized reading'  (duration: 93.823611ms)","trace[2021833767] 'range keys from in-memory index tree'  (duration: 62.751705ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T19:47:45.578095Z","caller":"traceutil/trace.go:172","msg":"trace[845078922] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1127; }","duration":"156.50217ms","start":"2025-12-21T19:47:45.421571Z","end":"2025-12-21T19:47:45.578073Z","steps":["trace[845078922] 'read index received'  (duration: 156.494889ms)","trace[845078922] 'applied index is now lower than readState.Index'  (duration: 6.471µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.578196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.877517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"warn","ts":"2025-12-21T19:47:45.578210Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.595517ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.578257Z","caller":"traceutil/trace.go:172","msg":"trace[398374212] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1105; }","duration":"221.653683ms","start":"2025-12-21T19:47:45.356598Z","end":"2025-12-21T19:47:45.578251Z","steps":["trace[398374212] 'agreement among raft nodes before linearized reading'  (duration: 221.573883ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.578259Z","caller":"traceutil/trace.go:172","msg":"trace[2024770133] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1105; }","duration":"206.923164ms","start":"2025-12-21T19:47:45.371297Z","end":"2025-12-21T19:47:45.578220Z","steps":["trace[2024770133] 'agreement among raft nodes before linearized reading'  (duration: 206.795685ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.578322Z","caller":"traceutil/trace.go:172","msg":"trace[47854218] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"240.869293ms","start":"2025-12-21T19:47:45.337442Z","end":"2025-12-21T19:47:45.578312Z","steps":["trace[47854218] 'process raft request'  (duration: 240.759428ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.698456Z","caller":"traceutil/trace.go:172","msg":"trace[495229169] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"120.281094ms","start":"2025-12-21T19:47:45.578156Z","end":"2025-12-21T19:47:45.698437Z","steps":["trace[495229169] 'read index received'  (duration: 120.274591ms)","trace[495229169] 'applied index is now lower than readState.Index'  (duration: 5.27µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.748319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.965733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T19:47:45.748375Z","caller":"traceutil/trace.go:172","msg":"trace[1384059515] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1106; }","duration":"312.030752ms","start":"2025-12-21T19:47:45.436333Z","end":"2025-12-21T19:47:45.748364Z","steps":["trace[1384059515] 'agreement among raft nodes before linearized reading'  (duration: 262.187251ms)","trace[1384059515] 'range keys from in-memory index tree'  (duration: 49.754993ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.748400Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T19:47:45.436316Z","time spent":"312.079127ms","remote":"127.0.0.1:39486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-21T19:47:45.748472Z","caller":"traceutil/trace.go:172","msg":"trace[553710565] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"259.084644ms","start":"2025-12-21T19:47:45.489370Z","end":"2025-12-21T19:47:45.748454Z","steps":["trace[553710565] 'process raft request'  (duration: 209.097207ms)","trace[553710565] 'compare'  (duration: 49.887204ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T19:47:45.754959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.96993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-gp4pn\" limit:1 ","response":"range_response_count:1 size:4944"}
	{"level":"info","ts":"2025-12-21T19:47:45.755006Z","caller":"traceutil/trace.go:172","msg":"trace[1674152388] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-gp4pn; range_end:; response_count:1; response_revision:1107; }","duration":"146.02389ms","start":"2025-12-21T19:47:45.608971Z","end":"2025-12-21T19:47:45.754995Z","steps":["trace[1674152388] 'agreement among raft nodes before linearized reading'  (duration: 145.897545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755046Z","caller":"traceutil/trace.go:172","msg":"trace[769583568] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"128.142512ms","start":"2025-12-21T19:47:45.626892Z","end":"2025-12-21T19:47:45.755035Z","steps":["trace[769583568] 'process raft request'  (duration: 128.10879ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755084Z","caller":"traceutil/trace.go:172","msg":"trace[2017445893] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"172.935164ms","start":"2025-12-21T19:47:45.582135Z","end":"2025-12-21T19:47:45.755070Z","steps":["trace[2017445893] 'process raft request'  (duration: 172.829856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T19:47:45.755097Z","caller":"traceutil/trace.go:172","msg":"trace[2117487475] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"173.654174ms","start":"2025-12-21T19:47:45.581430Z","end":"2025-12-21T19:47:45.755084Z","steps":["trace[2117487475] 'process raft request'  (duration: 173.448791ms)"],"step_count":1}
	
	
	==> gcp-auth [cc1211cf078437dc18f5b7b00cbb8a6afea2bfe1bc5def1261033d9805cf3fd7] <==
	2025/12/21 19:47:49 GCP Auth Webhook started!
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	2025/12/21 19:48:00 Ready to marshal response ...
	2025/12/21 19:48:00 Ready to write response ...
	
	
	==> kernel <==
	 19:48:10 up 30 min,  0 user,  load average: 2.00, 0.79, 0.29
	Linux addons-734405 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f] <==
	I1221 19:47:11.472258       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 19:47:11.472529       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1221 19:47:11.472651       1 main.go:148] setting mtu 1500 for CNI 
	I1221 19:47:11.472674       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 19:47:11.472691       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T19:47:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 19:47:11.691126       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 19:47:11.691157       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 19:47:11.691170       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 19:47:11.768498       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 19:47:12.069954       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 19:47:12.069978       1 metrics.go:72] Registering metrics
	I1221 19:47:12.070036       1 controller.go:711] "Syncing nftables rules"
	I1221 19:47:21.692353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:47:21.692403       1 main.go:301] handling current node
	I1221 19:47:31.691338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:47:31.691394       1 main.go:301] handling current node
	I1221 19:47:41.691640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:47:41.691698       1 main.go:301] handling current node
	I1221 19:47:51.691946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:47:51.692045       1 main.go:301] handling current node
	I1221 19:48:01.691516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1221 19:48:01.691571       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1221 19:47:25.516302       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.521629       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.542182       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.582919       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.664578       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:25.826072       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	E1221 19:47:26.146950       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.207.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.207.118:443: connect: connection refused" logger="UnhandledError"
	W1221 19:47:26.519004       1 handler_proxy.go:99] no RequestInfo found in the context
	W1221 19:47:26.519015       1 handler_proxy.go:99] no RequestInfo found in the context
	E1221 19:47:26.519047       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1221 19:47:26.519064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1221 19:47:26.519098       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1221 19:47:26.520256       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1221 19:47:26.815125       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1221 19:47:37.876327       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.883736       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.895370       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1221 19:47:37.902021       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1221 19:48:09.066062       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41288: use of closed network connection
	E1221 19:48:09.204950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41302: use of closed network connection
	
	
	==> kube-controller-manager [8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2] <==
	I1221 19:47:07.860708       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 19:47:07.860733       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 19:47:07.860760       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 19:47:07.860777       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 19:47:07.860811       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 19:47:07.860762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 19:47:07.860812       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 19:47:07.860817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1221 19:47:07.860780       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 19:47:07.863450       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:07.866692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:07.872938       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 19:47:07.872994       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 19:47:07.873017       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 19:47:07.873022       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 19:47:07.873026       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 19:47:07.878136       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-734405" podCIDRs=["10.244.0.0/24"]
	I1221 19:47:07.879102       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1221 19:47:10.258187       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1221 19:47:22.861476       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1221 19:47:37.869348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1221 19:47:37.869435       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1221 19:47:37.889053       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1221 19:47:37.970012       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 19:47:37.990189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe] <==
	I1221 19:47:09.017510       1 server_linux.go:53] "Using iptables proxy"
	I1221 19:47:09.160962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 19:47:09.262546       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 19:47:09.262585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1221 19:47:09.262666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 19:47:09.332533       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 19:47:09.332623       1 server_linux.go:132] "Using iptables Proxier"
	I1221 19:47:09.339695       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 19:47:09.345297       1 server.go:527] "Version info" version="v1.34.3"
	I1221 19:47:09.345430       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 19:47:09.346946       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 19:47:09.347415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 19:47:09.347454       1 config.go:200] "Starting service config controller"
	I1221 19:47:09.347461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 19:47:09.347478       1 config.go:106] "Starting endpoint slice config controller"
	I1221 19:47:09.347483       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 19:47:09.347075       1 config.go:309] "Starting node config controller"
	I1221 19:47:09.347501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 19:47:09.347507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 19:47:09.448278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 19:47:09.448972       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 19:47:09.449006       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2] <==
	E1221 19:47:00.865580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:47:00.865610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 19:47:00.865644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 19:47:00.865678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 19:47:00.865714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 19:47:00.865749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 19:47:00.866020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 19:47:00.866020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 19:47:00.866271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 19:47:00.866418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 19:47:00.866441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 19:47:00.866464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 19:47:00.866557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:47:00.866575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:47:01.827291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 19:47:01.855333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 19:47:01.961904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 19:47:01.973084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 19:47:01.973216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 19:47:01.981471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 19:47:02.007611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 19:47:02.027588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 19:47:02.075917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 19:47:02.120519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1221 19:47:05.363031       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.313394    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591666c1-8b3b-4987-9e9e-e118298cc3c8-kube-api-access-zdvwd" (OuterVolumeSpecName: "kube-api-access-zdvwd") pod "591666c1-8b3b-4987-9e9e-e118298cc3c8" (UID: "591666c1-8b3b-4987-9e9e-e118298cc3c8"). InnerVolumeSpecName "kube-api-access-zdvwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.411870    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdvwd\" (UniqueName: \"kubernetes.io/projected/591666c1-8b3b-4987-9e9e-e118298cc3c8-kube-api-access-zdvwd\") on node \"addons-734405\" DevicePath \"\""
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.411906    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zs2mz\" (UniqueName: \"kubernetes.io/projected/a33ca337-832f-478a-9df0-13982a6d27f6-kube-api-access-zs2mz\") on node \"addons-734405\" DevicePath \"\""
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.592154    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bdea63eea4ba5359df4050d2c1f3c57e653c06fecf5848e932c1fb35c7fd644"
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.597652    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267ef654d52967c78a722c5c526a605aed1cbe2f976b9e10fbb61695b2217952"
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.604852    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="yakd-dashboard/yakd-dashboard-6654c87f9b-lz7ml" podStartSLOduration=12.597037215 podStartE2EDuration="34.604830157s" podCreationTimestamp="2025-12-21 19:47:10 +0000 UTC" firstStartedPulling="2025-12-21 19:47:22.273021334 +0000 UTC m=+18.947390800" lastFinishedPulling="2025-12-21 19:47:44.280814262 +0000 UTC m=+40.955183742" observedRunningTime="2025-12-21 19:47:44.604348205 +0000 UTC m=+41.278717703" watchObservedRunningTime="2025-12-21 19:47:44.604830157 +0000 UTC m=+41.279199638"
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.715367    1290 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzt7v\" (UniqueName: \"kubernetes.io/projected/aa2cbf63-c5c4-4c0a-ae43-e1a728db228d-kube-api-access-pzt7v\") pod \"aa2cbf63-c5c4-4c0a-ae43-e1a728db228d\" (UID: \"aa2cbf63-c5c4-4c0a-ae43-e1a728db228d\") "
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.717521    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2cbf63-c5c4-4c0a-ae43-e1a728db228d-kube-api-access-pzt7v" (OuterVolumeSpecName: "kube-api-access-pzt7v") pod "aa2cbf63-c5c4-4c0a-ae43-e1a728db228d" (UID: "aa2cbf63-c5c4-4c0a-ae43-e1a728db228d"). InnerVolumeSpecName "kube-api-access-pzt7v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 21 19:47:44 addons-734405 kubelet[1290]: I1221 19:47:44.815955    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzt7v\" (UniqueName: \"kubernetes.io/projected/aa2cbf63-c5c4-4c0a-ae43-e1a728db228d-kube-api-access-pzt7v\") on node \"addons-734405\" DevicePath \"\""
	Dec 21 19:47:45 addons-734405 kubelet[1290]: I1221 19:47:45.602414    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85101bc1cf84925253c43408f257faa61023b569e2da0da1fff8df6b159cdfa"
	Dec 21 19:47:46 addons-734405 kubelet[1290]: I1221 19:47:46.608466    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5xdvv" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:47:46 addons-734405 kubelet[1290]: I1221 19:47:46.629287    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-5xdvv" podStartSLOduration=2.047612051 podStartE2EDuration="25.629268986s" podCreationTimestamp="2025-12-21 19:47:21 +0000 UTC" firstStartedPulling="2025-12-21 19:47:22.319007127 +0000 UTC m=+18.993376590" lastFinishedPulling="2025-12-21 19:47:45.900664063 +0000 UTC m=+42.575033525" observedRunningTime="2025-12-21 19:47:46.62868973 +0000 UTC m=+43.303059211" watchObservedRunningTime="2025-12-21 19:47:46.629268986 +0000 UTC m=+43.303638467"
	Dec 21 19:47:47 addons-734405 kubelet[1290]: I1221 19:47:47.611629    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5xdvv" secret="" err="secret \"gcp-auth\" not found"
	Dec 21 19:47:48 addons-734405 kubelet[1290]: I1221 19:47:48.629912    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-lvc5c" podStartSLOduration=17.556167269 podStartE2EDuration="38.629894648s" podCreationTimestamp="2025-12-21 19:47:10 +0000 UTC" firstStartedPulling="2025-12-21 19:47:27.379399302 +0000 UTC m=+24.053768761" lastFinishedPulling="2025-12-21 19:47:48.453126678 +0000 UTC m=+45.127496140" observedRunningTime="2025-12-21 19:47:48.629573715 +0000 UTC m=+45.303943195" watchObservedRunningTime="2025-12-21 19:47:48.629894648 +0000 UTC m=+45.304264156"
	Dec 21 19:47:50 addons-734405 kubelet[1290]: I1221 19:47:50.639056    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-f5n74" podStartSLOduration=22.007339398 podStartE2EDuration="33.639036308s" podCreationTimestamp="2025-12-21 19:47:17 +0000 UTC" firstStartedPulling="2025-12-21 19:47:37.938747476 +0000 UTC m=+34.613116953" lastFinishedPulling="2025-12-21 19:47:49.570444383 +0000 UTC m=+46.244813863" observedRunningTime="2025-12-21 19:47:50.638605556 +0000 UTC m=+47.312975036" watchObservedRunningTime="2025-12-21 19:47:50.639036308 +0000 UTC m=+47.313405793"
	Dec 21 19:47:51 addons-734405 kubelet[1290]: I1221 19:47:51.440004    1290 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 21 19:47:51 addons-734405 kubelet[1290]: I1221 19:47:51.440054    1290 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 21 19:47:53 addons-734405 kubelet[1290]: I1221 19:47:53.658465    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-9tblq" podStartSLOduration=1.5639498 podStartE2EDuration="32.658444877s" podCreationTimestamp="2025-12-21 19:47:21 +0000 UTC" firstStartedPulling="2025-12-21 19:47:22.261968336 +0000 UTC m=+18.936337808" lastFinishedPulling="2025-12-21 19:47:53.356463414 +0000 UTC m=+50.030832885" observedRunningTime="2025-12-21 19:47:53.657630903 +0000 UTC m=+50.332000383" watchObservedRunningTime="2025-12-21 19:47:53.658444877 +0000 UTC m=+50.332814357"
	Dec 21 19:47:53 addons-734405 kubelet[1290]: E1221 19:47:53.684636    1290 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 21 19:47:53 addons-734405 kubelet[1290]: E1221 19:47:53.684732    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45150a37-5dac-4f62-a0c4-4044a717c870-gcr-creds podName:45150a37-5dac-4f62-a0c4-4044a717c870 nodeName:}" failed. No retries permitted until 2025-12-21 19:48:25.684711115 +0000 UTC m=+82.359080577 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/45150a37-5dac-4f62-a0c4-4044a717c870-gcr-creds") pod "registry-creds-764b6fb674-8smmr" (UID: "45150a37-5dac-4f62-a0c4-4044a717c870") : secret "registry-creds-gcr" not found
	Dec 21 19:47:58 addons-734405 kubelet[1290]: I1221 19:47:58.677304    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-dmwnv" podStartSLOduration=44.920591448 podStartE2EDuration="48.677285517s" podCreationTimestamp="2025-12-21 19:47:10 +0000 UTC" firstStartedPulling="2025-12-21 19:47:53.991451478 +0000 UTC m=+50.665820937" lastFinishedPulling="2025-12-21 19:47:57.748145548 +0000 UTC m=+54.422515006" observedRunningTime="2025-12-21 19:47:58.677068032 +0000 UTC m=+55.351437512" watchObservedRunningTime="2025-12-21 19:47:58.677285517 +0000 UTC m=+55.351655000"
	Dec 21 19:48:01 addons-734405 kubelet[1290]: I1221 19:48:01.044378    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a5a9677e-ccdd-4fb3-ad46-086786f62164-gcp-creds\") pod \"busybox\" (UID: \"a5a9677e-ccdd-4fb3-ad46-086786f62164\") " pod="default/busybox"
	Dec 21 19:48:01 addons-734405 kubelet[1290]: I1221 19:48:01.044442    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9sk\" (UniqueName: \"kubernetes.io/projected/a5a9677e-ccdd-4fb3-ad46-086786f62164-kube-api-access-hw9sk\") pod \"busybox\" (UID: \"a5a9677e-ccdd-4fb3-ad46-086786f62164\") " pod="default/busybox"
	Dec 21 19:48:07 addons-734405 kubelet[1290]: I1221 19:48:07.023919    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=6.449940149 podStartE2EDuration="7.023897289s" podCreationTimestamp="2025-12-21 19:48:00 +0000 UTC" firstStartedPulling="2025-12-21 19:48:01.253352795 +0000 UTC m=+57.927722256" lastFinishedPulling="2025-12-21 19:48:01.827309921 +0000 UTC m=+58.501679396" observedRunningTime="2025-12-21 19:48:02.698219137 +0000 UTC m=+59.372588617" watchObservedRunningTime="2025-12-21 19:48:07.023897289 +0000 UTC m=+63.698266768"
	Dec 21 19:48:07 addons-734405 kubelet[1290]: I1221 19:48:07.406678    1290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07495aec-161d-42df-919d-478817ecd9c8" path="/var/lib/kubelet/pods/07495aec-161d-42df-919d-478817ecd9c8/volumes"
	
	
	==> storage-provisioner [23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5] <==
	W1221 19:47:46.307102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:48.309747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:48.373202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:50.376532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:50.381047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:52.384127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:52.388546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:54.393832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:54.399472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:56.403308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:56.410156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:58.412525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:47:58.415973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:00.418285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:00.421654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:02.424554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:02.427758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:04.429928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:04.434824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:06.437713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:06.441975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:08.445057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:08.448520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:10.451671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 19:48:10.455208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-734405 -n addons-734405
helpers_test.go:270: (dbg) Run:  kubectl --context addons-734405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-lrb4r ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn registry-creds-764b6fb674-8smmr
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-734405 describe pod gcp-auth-certs-patch-lrb4r ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn registry-creds-764b6fb674-8smmr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-734405 describe pod gcp-auth-certs-patch-lrb4r ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn registry-creds-764b6fb674-8smmr: exit status 1 (56.625295ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-lrb4r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-r2l6g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gp4pn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8smmr" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-734405 describe pod gcp-auth-certs-patch-lrb4r ingress-nginx-admission-create-r2l6g ingress-nginx-admission-patch-gp4pn registry-creds-764b6fb674-8smmr: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable headlamp --alsologtostderr -v=1: exit status 11 (244.468385ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:11.671617   23553 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:11.671916   23553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:11.671927   23553 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:11.671931   23553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:11.672155   23553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:11.672429   23553 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:11.672741   23553 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:11.672760   23553 addons.go:622] checking whether the cluster is paused
	I1221 19:48:11.672838   23553 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:11.672848   23553 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:11.673308   23553 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:11.692094   23553 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:11.692153   23553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:11.709283   23553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:11.805432   23553 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:11.805517   23553 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:11.832191   23553 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:11.832216   23553 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:11.832245   23553 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:11.832251   23553 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:11.832257   23553 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:11.832262   23553 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:11.832267   23553 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:11.832271   23553 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:11.832275   23553 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:11.832283   23553 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:11.832287   23553 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:11.832292   23553 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:11.832295   23553 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:11.832308   23553 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:11.832314   23553 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:11.832322   23553 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:11.832325   23553 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:11.832328   23553 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:11.832330   23553 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:11.832333   23553 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:11.832336   23553 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:11.832338   23553 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:11.832341   23553 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:11.832348   23553 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:11.832350   23553 cri.go:96] found id: ""
	I1221 19:48:11.832386   23553 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:11.845817   23553 out.go:203] 
	W1221 19:48:11.847029   23553 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:11.847056   23553 out.go:285] * 
	* 
	W1221 19:48:11.850008   23553 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:11.851477   23553 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-ltblw" [715c9e5d-d18a-4f94-b439-b3f0c8f7b7e3] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003440838s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (239.851859ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:20.760784   24122 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:20.761098   24122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:20.761113   24122 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:20.761120   24122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:20.761339   24122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:20.761622   24122 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:20.761928   24122 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:20.761949   24122 addons.go:622] checking whether the cluster is paused
	I1221 19:48:20.762039   24122 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:20.762055   24122 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:20.762460   24122 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:20.780626   24122 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:20.780668   24122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:20.797389   24122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:20.894551   24122 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:20.894645   24122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:20.922695   24122 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:20.922720   24122 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:20.922727   24122 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:20.922731   24122 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:20.922736   24122 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:20.922744   24122 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:20.922749   24122 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:20.922753   24122 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:20.922757   24122 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:20.922765   24122 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:20.922770   24122 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:20.922775   24122 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:20.922780   24122 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:20.922787   24122 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:20.922792   24122 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:20.922805   24122 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:20.922808   24122 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:20.922813   24122 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:20.922819   24122 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:20.922821   24122 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:20.922824   24122 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:20.922827   24122 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:20.922830   24122 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:20.922834   24122 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:20.922837   24122 cri.go:96] found id: ""
	I1221 19:48:20.922883   24122 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:20.936215   24122 out.go:203] 
	W1221 19:48:20.937580   24122 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:20.937595   24122 out.go:285] * 
	* 
	W1221 19:48:20.940542   24122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:20.941654   24122 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
TestAddons/parallel/LocalPath (9.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-734405 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-734405 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [ae176741-dfa0-40af-bc40-bdc7db5ff657] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [ae176741-dfa0-40af-bc40-bdc7db5ff657] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [ae176741-dfa0-40af-bc40-bdc7db5ff657] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003403292s
addons_test.go:969: (dbg) Run:  kubectl --context addons-734405 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 ssh "cat /opt/local-path-provisioner/pvc-c9a8c150-674a-4b88-96eb-4f04de96494b_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-734405 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-734405 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (241.293509ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:24.641065   24654 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:24.641475   24654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:24.641486   24654 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:24.641491   24654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:24.641673   24654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:24.641924   24654 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:24.642250   24654 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:24.642269   24654 addons.go:622] checking whether the cluster is paused
	I1221 19:48:24.642350   24654 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:24.642361   24654 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:24.642689   24654 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:24.660130   24654 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:24.660182   24654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:24.676374   24654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:24.771387   24654 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:24.771453   24654 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:24.800650   24654 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:24.800676   24654 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:24.800682   24654 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:24.800686   24654 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:24.800689   24654 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:24.800693   24654 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:24.800696   24654 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:24.800699   24654 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:24.800702   24654 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:24.800717   24654 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:24.800724   24654 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:24.800726   24654 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:24.800729   24654 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:24.800731   24654 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:24.800734   24654 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:24.800738   24654 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:24.800741   24654 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:24.800744   24654 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:24.800746   24654 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:24.800749   24654 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:24.800752   24654 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:24.800755   24654 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:24.800761   24654 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:24.800764   24654 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:24.800767   24654 cri.go:96] found id: ""
	I1221 19:48:24.800805   24654 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:24.814751   24654 out.go:203] 
	W1221 19:48:24.816289   24654 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:24.816315   24654 out.go:285] * 
	* 
	W1221 19:48:24.819292   24654 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:24.823386   24654 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.08s)
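Note: every addon-disable failure in this run exits with MK_ADDON_DISABLE_PAUSED because the paused-state check shells out to `sudo runc list -f json` on the node, and that command fails with `open /run/runc: no such file or directory`. A minimal manual reproduction against this run's profile, reusing the exact in-node commands from the log above (a sketch; the profile name addons-734405 comes from this run, and `minikube ssh` is only used to run the same commands by hand):

    # Same container listing the disable path runs first (copied from the log above)
    out/minikube-linux-amd64 -p addons-734405 ssh -- \
      'sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'

    # The step that actually fails: runc cannot open its state directory
    out/minikube-linux-amd64 -p addons-734405 ssh -- 'sudo runc list -f json; ls -ld /run/runc'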

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jlq7q" [5c7ed01e-0fe4-4827-9dae-a9bcd97f548e] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002763875s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (247.579312ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:15.510099   23711 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:15.510417   23711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:15.510427   23711 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:15.510432   23711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:15.510678   23711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:15.510940   23711 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:15.511248   23711 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:15.511271   23711 addons.go:622] checking whether the cluster is paused
	I1221 19:48:15.511356   23711 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:15.511369   23711 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:15.511761   23711 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:15.531064   23711 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:15.531119   23711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:15.550351   23711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:15.647617   23711 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:15.647697   23711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:15.674563   23711 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:15.674599   23711 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:15.674605   23711 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:15.674610   23711 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:15.674612   23711 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:15.674620   23711 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:15.674623   23711 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:15.674626   23711 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:15.674629   23711 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:15.674649   23711 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:15.674659   23711 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:15.674664   23711 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:15.674672   23711 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:15.674677   23711 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:15.674684   23711 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:15.674700   23711 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:15.674706   23711 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:15.674711   23711 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:15.674714   23711 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:15.674717   23711 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:15.674719   23711 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:15.674722   23711 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:15.674725   23711 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:15.674728   23711 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:15.674731   23711 cri.go:96] found id: ""
	I1221 19:48:15.674776   23711 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:15.688220   23711 out.go:203] 
	W1221 19:48:15.689666   23711 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:15.689682   23711 out.go:285] * 
	* 
	W1221 19:48:15.692571   23711 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:15.693726   23711 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                    
TestAddons/parallel/Yakd (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-lz7ml" [6c799cdc-d99a-48c6-b26b-d46eddd7c6e7] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00325299s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable yakd --alsologtostderr -v=1: exit status 11 (282.413914ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:29.310195   25695 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:29.310558   25695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:29.310572   25695 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:29.310577   25695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:29.310870   25695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:29.311218   25695 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:29.311687   25695 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:29.311715   25695 addons.go:622] checking whether the cluster is paused
	I1221 19:48:29.311858   25695 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:29.311889   25695 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:29.312431   25695 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:29.334595   25695 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:29.334670   25695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:29.357497   25695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:29.465551   25695 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:29.465651   25695 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:29.496045   25695 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:29.496096   25695 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:29.496103   25695 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:29.496106   25695 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:29.496109   25695 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:29.496112   25695 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:29.496115   25695 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:29.496117   25695 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:29.496120   25695 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:29.496125   25695 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:29.496128   25695 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:29.496131   25695 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:29.496134   25695 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:29.496137   25695 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:29.496140   25695 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:29.496149   25695 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:29.496153   25695 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:29.496158   25695 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:29.496161   25695 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:29.496163   25695 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:29.496167   25695 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:29.496175   25695 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:29.496177   25695 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:29.496181   25695 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:29.496184   25695 cri.go:96] found id: ""
	I1221 19:48:29.496220   25695 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:29.512010   25695 out.go:203] 
	W1221 19:48:29.513372   25695 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:29.513400   25695 out.go:285] * 
	* 
	W1221 19:48:29.517799   25695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:29.519207   25695 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.29s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-s628b" [b4f9a790-2ff8-43f6-8199-0b06654607c7] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002497443s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-734405 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-734405 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (236.425115ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:48:27.003963   25122 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:48:27.004120   25122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:27.004129   25122 out.go:374] Setting ErrFile to fd 2...
	I1221 19:48:27.004134   25122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:48:27.004497   25122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:48:27.004830   25122 mustload.go:66] Loading cluster: addons-734405
	I1221 19:48:27.005171   25122 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:27.005195   25122 addons.go:622] checking whether the cluster is paused
	I1221 19:48:27.005309   25122 config.go:182] Loaded profile config "addons-734405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:48:27.005333   25122 host.go:66] Checking if "addons-734405" exists ...
	I1221 19:48:27.005719   25122 cli_runner.go:164] Run: docker container inspect addons-734405 --format={{.State.Status}}
	I1221 19:48:27.025133   25122 ssh_runner.go:195] Run: systemctl --version
	I1221 19:48:27.025187   25122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-734405
	I1221 19:48:27.041374   25122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/addons-734405/id_rsa Username:docker}
	I1221 19:48:27.136172   25122 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 19:48:27.136308   25122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 19:48:27.163030   25122 cri.go:96] found id: "8193c5ae3e9a003ea1877e6a0b1c9b5fbc312683ba4a116887fed13bf2683154"
	I1221 19:48:27.163063   25122 cri.go:96] found id: "676b24cbeac1b3bb3b86a993f9f8dd5df1abe81d9b6bb0232dbbad15d638b823"
	I1221 19:48:27.163068   25122 cri.go:96] found id: "ae4f670583b4b182293b717ad7ad125a17b2456028c7dbd27b76ca37adc65536"
	I1221 19:48:27.163071   25122 cri.go:96] found id: "4da1a1c1615a16ec46742d810942c0f0594350dd79db4ef1a09a7fca0ff26c86"
	I1221 19:48:27.163074   25122 cri.go:96] found id: "83cd8b34dd2bcf23067b6306ffa38b91762349398d82e53aeac5ad5488489a1b"
	I1221 19:48:27.163077   25122 cri.go:96] found id: "9154e33c67350cf8648438028cc80e561774d53d29a87d24f219526c6883c0de"
	I1221 19:48:27.163080   25122 cri.go:96] found id: "fc4218afd9e593172e6df278dd90d2813b3dd655711c7e4e9a3276520ffdc17f"
	I1221 19:48:27.163083   25122 cri.go:96] found id: "d37800c5570f8c0ce24a2f302a2f98619dc14995b580b01c52073ed81433f4d1"
	I1221 19:48:27.163085   25122 cri.go:96] found id: "5acc717deb7f9b31a866277363c1c6a80c40175846b1ba7b4d73f62a27f4d341"
	I1221 19:48:27.163102   25122 cri.go:96] found id: "737d21aac5c57624b259ae871fe0e07be1dba4597468dc1196d5dc29495fed27"
	I1221 19:48:27.163106   25122 cri.go:96] found id: "749cd4daccd503fe99087482965154e76ec4fa71f8d8a14ebd9c6bf86716b364"
	I1221 19:48:27.163111   25122 cri.go:96] found id: "c99f35ca87dcf1f2528a7498f41c23004d48c08c287d25461a9d5dd797dd6714"
	I1221 19:48:27.163115   25122 cri.go:96] found id: "33aa662cb1f0bef79dfcf3a92ec273ad7f25a5688c8db5b7ae200f774e74e3ec"
	I1221 19:48:27.163120   25122 cri.go:96] found id: "d7348a5e060fd9cd4734c2fe04f1e67e369d1cc6a16a46d37176360e175e3a8d"
	I1221 19:48:27.163124   25122 cri.go:96] found id: "abf23e714a098de79baf7577846b9b62cf3bec2fdeddb459b39f9d5fd50f42f9"
	I1221 19:48:27.163142   25122 cri.go:96] found id: "54e47bcdd2cec6418edf8ef09c37c6f3a069f57efa60bbdb5f6834b815a29df8"
	I1221 19:48:27.163153   25122 cri.go:96] found id: "d6093c1a7f9f67fb8bfd2e5d93f01d1e528445bcdd00173451f94703fac12de2"
	I1221 19:48:27.163160   25122 cri.go:96] found id: "23a6a681dd961e50f6b6acf650cb8306382eb660a121fceb3ac6f154f793d4c5"
	I1221 19:48:27.163167   25122 cri.go:96] found id: "e631c821d8606270afc3ef440632c3bf63a9a26edd34ad33488adc424163d91f"
	I1221 19:48:27.163172   25122 cri.go:96] found id: "026bbd1e79a4ddba49b8952a036792a0036397862b49a41384936cd1e5c2ecbe"
	I1221 19:48:27.163182   25122 cri.go:96] found id: "e8e92c3f6bb0cb69dfd26915bedc288fcd28f1bb7f04699968c8d937c9b8ffe2"
	I1221 19:48:27.163189   25122 cri.go:96] found id: "8989e50092359c1c45eabe98abb9db0207b77c88c42ad5e80391fce84bead3d2"
	I1221 19:48:27.163191   25122 cri.go:96] found id: "5cbca605ea4a519bca82bf0a26a780d1044b917206dc07a8ddfab8ac714bfdce"
	I1221 19:48:27.163194   25122 cri.go:96] found id: "a790cf4635e7ce151f0cf556d1f34f624cb535ef575d0cc5782652e6d5ebaed8"
	I1221 19:48:27.163196   25122 cri.go:96] found id: ""
	I1221 19:48:27.163267   25122 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 19:48:27.176487   25122 out.go:203] 
	W1221 19:48:27.177698   25122 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:48:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 19:48:27.177722   25122 out.go:285] * 
	* 
	W1221 19:48:27.180729   25122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 19:48:27.181927   25122 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-734405 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
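The four addon-disable failures above are identical apart from the addon name, and each one points at `minikube logs --file=logs.txt` for follow-up. A quick way to collect that file and scan it for the runc error (a sketch; the grep pattern is only illustrative):

    out/minikube-linux-amd64 -p addons-734405 logs --file=logs.txt
    grep -n '/run/runc\|runc list -f json' logs.txt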

                                                
                                    
TestJSONOutput/pause/Command (2.31s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-783448 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-783448 --output=json --user=testUser: exit status 80 (2.305841523s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a40edeb1-4c9d-442a-9b56-ac7020c20789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-783448 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"83805176-d95e-4d4d-894c-a714b431a0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-21T20:06:10Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"10b82faf-c706-41ff-bb3e-8857ff3a7287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-783448 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.31s)
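The `--output=json` stream above is one CloudEvent per line, and the failure detail sits in the `io.k8s.sigs.minikube.error` events. A sketch for extracting just those messages with jq (same command line as the test; jq is an addition, not part of the test):

    out/minikube-linux-amd64 pause -p json-output-783448 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'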

                                                
                                    
TestJSONOutput/unpause/Command (1.86s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-783448 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-783448 --output=json --user=testUser: exit status 80 (1.857887474s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"879ebc96-a32d-45ca-88b5-a4d5f4dd2955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-783448 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b1985aea-ccdc-48f2-93be-90cfe29dd971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-21T20:06:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"1bc35372-fe1f-4c03-b096-90055f39cd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-783448 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.86s)
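After the failed pause/unpause pair it can help to confirm what state the node was actually left in. `minikube status` also supports JSON output, so a quick check might look like this (a sketch, not part of the test; the field names are minikube's standard status fields):

    out/minikube-linux-amd64 status -p json-output-783448 --output=json \
      | jq '{Name, Host, Kubelet, APIServer}'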

                                                
                                    
TestPause/serial/Pause (5.24s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-592353 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-592353 --alsologtostderr -v=5: exit status 80 (1.692791971s)

                                                
                                                
-- stdout --
	* Pausing node pause-592353 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:21:57.787938  265033 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:21:57.788058  265033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:57.788067  265033 out.go:374] Setting ErrFile to fd 2...
	I1221 20:21:57.788072  265033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:57.788338  265033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:21:57.788603  265033 out.go:368] Setting JSON to false
	I1221 20:21:57.788622  265033 mustload.go:66] Loading cluster: pause-592353
	I1221 20:21:57.789024  265033 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:57.789537  265033 cli_runner.go:164] Run: docker container inspect pause-592353 --format={{.State.Status}}
	I1221 20:21:57.813426  265033 host.go:66] Checking if "pause-592353" exists ...
	I1221 20:21:57.813761  265033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:21:57.881322  265033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-21 20:21:57.86953294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:21:57.882345  265033 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-592353 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification
:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:21:57.885410  265033 out.go:179] * Pausing node pause-592353 ... 
	I1221 20:21:57.886600  265033 host.go:66] Checking if "pause-592353" exists ...
	I1221 20:21:57.886868  265033 ssh_runner.go:195] Run: systemctl --version
	I1221 20:21:57.886934  265033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:57.908560  265033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:58.010593  265033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:21:58.023901  265033 pause.go:52] kubelet running: true
	I1221 20:21:58.023988  265033 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:21:58.196564  265033 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:21:58.196695  265033 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:21:58.276711  265033 cri.go:96] found id: "d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7"
	I1221 20:21:58.276739  265033 cri.go:96] found id: "8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30"
	I1221 20:21:58.276745  265033 cri.go:96] found id: "42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8"
	I1221 20:21:58.276750  265033 cri.go:96] found id: "5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d"
	I1221 20:21:58.276754  265033 cri.go:96] found id: "1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c"
	I1221 20:21:58.276758  265033 cri.go:96] found id: "c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c"
	I1221 20:21:58.276763  265033 cri.go:96] found id: "201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133"
	I1221 20:21:58.276768  265033 cri.go:96] found id: ""
	I1221 20:21:58.276814  265033 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:21:58.290910  265033 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:21:58Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:21:58.631478  265033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:21:58.644465  265033 pause.go:52] kubelet running: false
	I1221 20:21:58.644518  265033 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:21:58.802065  265033 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:21:58.802195  265033 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:21:58.869251  265033 cri.go:96] found id: "d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7"
	I1221 20:21:58.869279  265033 cri.go:96] found id: "8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30"
	I1221 20:21:58.869285  265033 cri.go:96] found id: "42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8"
	I1221 20:21:58.869290  265033 cri.go:96] found id: "5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d"
	I1221 20:21:58.869294  265033 cri.go:96] found id: "1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c"
	I1221 20:21:58.869299  265033 cri.go:96] found id: "c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c"
	I1221 20:21:58.869304  265033 cri.go:96] found id: "201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133"
	I1221 20:21:58.869308  265033 cri.go:96] found id: ""
	I1221 20:21:58.869370  265033 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:21:59.185362  265033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:21:59.198994  265033 pause.go:52] kubelet running: false
	I1221 20:21:59.199054  265033 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:21:59.314030  265033 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:21:59.314106  265033 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:21:59.384512  265033 cri.go:96] found id: "d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7"
	I1221 20:21:59.384537  265033 cri.go:96] found id: "8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30"
	I1221 20:21:59.384543  265033 cri.go:96] found id: "42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8"
	I1221 20:21:59.384549  265033 cri.go:96] found id: "5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d"
	I1221 20:21:59.384554  265033 cri.go:96] found id: "1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c"
	I1221 20:21:59.384558  265033 cri.go:96] found id: "c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c"
	I1221 20:21:59.384563  265033 cri.go:96] found id: "201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133"
	I1221 20:21:59.384567  265033 cri.go:96] found id: ""
	I1221 20:21:59.384611  265033 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:21:59.399426  265033 out.go:203] 
	W1221 20:21:59.400793  265033 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:21:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:21:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:21:59.400812  265033 out.go:285] * 
	* 
	W1221 20:21:59.404942  265033 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:21:59.406059  265033 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-592353 --alsologtostderr -v=5" : exit status 80
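The pause path above retries `sudo runc list -f json` (retry.go, 300ms backoff) and then gives up with GUEST_PAUSE. Since this node runs cri-o, checking which low-level OCI runtime is actually configured and whether its state directory exists can narrow things down (a sketch; /run/crun is only an illustrative alternative path, not something observed in this run):

    out/minikube-linux-amd64 -p pause-592353 ssh -- 'sudo crictl info | grep -i -A2 runtime'
    out/minikube-linux-amd64 -p pause-592353 ssh -- 'ls -ld /run/runc /run/crun 2>&1'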
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-592353
helpers_test.go:244: (dbg) docker inspect pause-592353:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517",
	        "Created": "2025-12-21T20:21:14.812580296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:21:14.850690371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/hosts",
	        "LogPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517-json.log",
	        "Name": "/pause-592353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-592353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-592353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517",
	                "LowerDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-592353",
	                "Source": "/var/lib/docker/volumes/pause-592353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-592353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-592353",
	                "name.minikube.sigs.k8s.io": "pause-592353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5bcf56d7821acdc2d87a533566f5d1428162b773aa45bcc640b7f233246ebc27",
	            "SandboxKey": "/var/run/docker/netns/5bcf56d7821a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-592353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fa2c6c36158ec4453936581b63a2d728a2a1fa9f3e30e177dbd4ba7230cda13",
	                    "EndpointID": "b1c26713c5cb69eec074eeec788b8e0d83e7770b7cb20cd758d277434a215979",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ea:72:eb:b5:4e:77",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-592353",
	                        "e5171eb31530"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-592353 -n pause-592353
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-592353 -n pause-592353: exit status 2 (334.061309ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-592353 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-149976 sudo crio config                                                                                                                                                                                         │ cilium-149976             │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │                     │
	│ delete  │ -p cilium-149976                                                                                                                                                                                                          │ cilium-149976             │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p force-systemd-env-558127 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-558127  │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p running-upgrade-707221 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-707221    │ jenkins │ v1.35.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ delete  │ -p force-systemd-env-558127                                                                                                                                                                                               │ force-systemd-env-558127  │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p running-upgrade-707221 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-707221    │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │                     │
	│ start   │ -p test-preload-115092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                              │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:19 UTC │
	│ start   │ -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                             │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ image   │ test-preload-115092 image pull public.ecr.aws/docker/library/busybox:latest                                                                                                                                               │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ stop    │ -p test-preload-115092                                                                                                                                                                                                    │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ delete  │ -p kubernetes-upgrade-291108                                                                                                                                                                                              │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ start   │ -p cert-expiration-026803 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-026803    │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:20 UTC │
	│ start   │ -p test-preload-115092 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                        │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:20 UTC │
	│ image   │ test-preload-115092 image list                                                                                                                                                                                            │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:20 UTC │
	│ delete  │ -p test-preload-115092                                                                                                                                                                                                    │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:20 UTC │
	│ start   │ -p cert-options-746684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:21 UTC │
	│ ssh     │ cert-options-746684 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ ssh     │ -p cert-options-746684 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ delete  │ -p cert-options-746684                                                                                                                                                                                                    │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ start   │ -p pause-592353 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ delete  │ -p stopped-upgrade-611850                                                                                                                                                                                                 │ stopped-upgrade-611850    │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ start   │ -p auto-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-149976               │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │                     │
	│ start   │ -p pause-592353 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ pause   │ -p pause-592353 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:21:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:21:52.053544  263272 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:21:52.053771  263272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:52.053779  263272 out.go:374] Setting ErrFile to fd 2...
	I1221 20:21:52.053783  263272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:52.053971  263272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:21:52.054423  263272 out.go:368] Setting JSON to false
	I1221 20:21:52.055507  263272 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3861,"bootTime":1766344651,"procs":387,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:21:52.055572  263272 start.go:143] virtualization: kvm guest
	I1221 20:21:52.057812  263272 out.go:179] * [pause-592353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:21:52.059529  263272 notify.go:221] Checking for updates...
	I1221 20:21:52.059535  263272 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:21:52.060915  263272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:21:52.062079  263272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:21:52.063718  263272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:21:52.064932  263272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:21:52.066149  263272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:21:52.067924  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:52.068656  263272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:21:52.094017  263272 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:21:52.094161  263272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:21:52.157190  263272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-21 20:21:52.146866188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:21:52.157314  263272 docker.go:319] overlay module found
	I1221 20:21:52.160156  263272 out.go:179] * Using the docker driver based on existing profile
	I1221 20:21:52.161376  263272 start.go:309] selected driver: docker
	I1221 20:21:52.161394  263272 start.go:928] validating driver "docker" against &{Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.161519  263272 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:21:52.161600  263272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:21:52.228714  263272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-21 20:21:52.217806539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:21:52.229551  263272 cni.go:84] Creating CNI manager for ""
	I1221 20:21:52.229643  263272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:52.229711  263272 start.go:353] cluster config:
	{Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.233555  263272 out.go:179] * Starting "pause-592353" primary control-plane node in "pause-592353" cluster
	I1221 20:21:52.234609  263272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:21:52.236120  263272 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:21:52.237183  263272 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:52.237248  263272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:21:52.237269  263272 cache.go:65] Caching tarball of preloaded images
	I1221 20:21:52.237278  263272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:21:52.237356  263272 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:21:52.237370  263272 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:21:52.237532  263272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/config.json ...
	I1221 20:21:52.260880  263272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:21:52.260901  263272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:21:52.260922  263272 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:21:52.260955  263272 start.go:360] acquireMachinesLock for pause-592353: {Name:mk82f022bb0c28df78da9902d0a1772d3ef40883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:21:52.261016  263272 start.go:364] duration metric: took 40.452µs to acquireMachinesLock for "pause-592353"
	I1221 20:21:52.261031  263272 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:21:52.261037  263272 fix.go:54] fixHost starting: 
	I1221 20:21:52.261342  263272 cli_runner.go:164] Run: docker container inspect pause-592353 --format={{.State.Status}}
	I1221 20:21:52.282137  263272 fix.go:112] recreateIfNeeded on pause-592353: state=Running err=<nil>
	W1221 20:21:52.282168  263272 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:21:51.174269  260566 cli_runner.go:164] Run: docker network inspect auto-149976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:21:51.190867  260566 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:21:51.194801  260566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:21:51.204533  260566 kubeadm.go:884] updating cluster {Name:auto-149976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:21:51.204653  260566 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:51.204707  260566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:51.235457  260566 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:51.235475  260566 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:21:51.235515  260566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:51.260081  260566 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:51.260100  260566 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:21:51.260108  260566 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1221 20:21:51.260185  260566 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-149976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:21:51.260269  260566 ssh_runner.go:195] Run: crio config
	I1221 20:21:51.303157  260566 cni.go:84] Creating CNI manager for ""
	I1221 20:21:51.303178  260566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:51.303193  260566 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:21:51.303218  260566 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-149976 NodeName:auto-149976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:21:51.303359  260566 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-149976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:21:51.303417  260566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:21:51.311269  260566 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:21:51.311323  260566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:21:51.318634  260566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1221 20:21:51.330800  260566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:21:51.346534  260566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1221 20:21:51.358697  260566 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:21:51.362008  260566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:21:51.371512  260566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:51.450514  260566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:51.473582  260566 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976 for IP: 192.168.103.2
	I1221 20:21:51.473602  260566 certs.go:195] generating shared ca certs ...
	I1221 20:21:51.473623  260566 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.473757  260566 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:21:51.473795  260566 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:21:51.473804  260566 certs.go:257] generating profile certs ...
	I1221 20:21:51.473874  260566 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key
	I1221 20:21:51.473889  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt with IP's: []
	I1221 20:21:51.617372  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt ...
	I1221 20:21:51.617405  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: {Name:mk32f1716e31081c3f1f92da82e77097218f4068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.617595  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key ...
	I1221 20:21:51.617610  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key: {Name:mkd18499cb9f20f33985edd153ec67d23828a67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.617708  260566 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c
	I1221 20:21:51.617726  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1221 20:21:51.680901  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c ...
	I1221 20:21:51.680930  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c: {Name:mk15e432ff9ff634bc3f2a4390091f10f87d3534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.681098  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c ...
	I1221 20:21:51.681118  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c: {Name:mk207e22acff0c833c7904639ed2437c73ed6a32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.681248  260566 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt
	I1221 20:21:51.681349  260566 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key
	I1221 20:21:51.681417  260566 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key
	I1221 20:21:51.681435  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt with IP's: []
	I1221 20:21:51.829188  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt ...
	I1221 20:21:51.829220  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt: {Name:mk026f66d1c1ef478db7d7b0f10f18c53c53b91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.829420  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key ...
	I1221 20:21:51.829436  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key: {Name:mk66d9f7850635be7883902a76da2ab65c9d9490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.829635  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:21:51.829675  260566 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:21:51.829691  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:21:51.829718  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:21:51.829746  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:21:51.829774  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:21:51.829822  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:51.830424  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:21:51.848169  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:21:51.865173  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:21:51.881617  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:21:51.897948  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1221 20:21:51.914497  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:21:51.931039  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:21:51.950531  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 20:21:51.967984  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:21:51.987234  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:21:52.006577  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:21:52.023317  260566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:21:52.036360  260566 ssh_runner.go:195] Run: openssl version
	I1221 20:21:52.043100  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.051316  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:21:52.059029  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.062737  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.062796  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.105105  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:21:52.115709  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:21:52.127207  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.135659  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:21:52.145260  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.149406  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.149470  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.197818  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:21:52.208440  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:21:52.218903  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.228438  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:21:52.237622  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.242113  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.242189  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.286846  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:21:52.294926  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
	I1221 20:21:52.303593  260566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:21:52.307470  260566 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:21:52.307530  260566 kubeadm.go:401] StartCluster: {Name:auto-149976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.307610  260566 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:21:52.307667  260566 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:21:52.336768  260566 cri.go:96] found id: ""
	I1221 20:21:52.336834  260566 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:21:52.345879  260566 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:21:52.354267  260566 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:21:52.354322  260566 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:21:52.362854  260566 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:21:52.362874  260566 kubeadm.go:158] found existing configuration files:
	
	I1221 20:21:52.362919  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:21:52.370523  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:21:52.370582  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:21:52.378219  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:21:52.386946  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:21:52.386992  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:21:52.394863  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:21:52.402521  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:21:52.402564  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:21:52.409795  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:21:52.417534  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:21:52.417588  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:21:52.425990  260566 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:21:52.470845  260566 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 20:21:52.470929  260566 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 20:21:52.495437  260566 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 20:21:52.495528  260566 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 20:21:52.495621  260566 kubeadm.go:319] OS: Linux
	I1221 20:21:52.495688  260566 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 20:21:52.495766  260566 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 20:21:52.495842  260566 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 20:21:52.495913  260566 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 20:21:52.495986  260566 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 20:21:52.496079  260566 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 20:21:52.496157  260566 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 20:21:52.496265  260566 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 20:21:52.564313  260566 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 20:21:52.564487  260566 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 20:21:52.564626  260566 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 20:21:52.571671  260566 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 20:21:48.625316  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:48.625731  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:48.625784  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:48.625835  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:48.661558  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:48.661580  229762 cri.go:96] found id: ""
	I1221 20:21:48.661587  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:48.661629  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.665245  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:48.665314  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:48.707346  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:48.707373  229762 cri.go:96] found id: ""
	I1221 20:21:48.707385  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:48.707455  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.711789  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:48.711849  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:48.747380  229762 cri.go:96] found id: ""
	I1221 20:21:48.747408  229762 logs.go:282] 0 containers: []
	W1221 20:21:48.747419  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:48.747428  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:48.747489  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:48.781029  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:48.781051  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:48.781054  229762 cri.go:96] found id: ""
	I1221 20:21:48.781061  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:48.781117  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.784754  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.788703  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:48.788756  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:48.823779  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:48.823803  229762 cri.go:96] found id: ""
	I1221 20:21:48.823812  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:48.823865  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.827905  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:48.827969  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:48.862913  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:48.862939  229762 cri.go:96] found id: ""
	I1221 20:21:48.862953  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:48.863013  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.867339  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:48.867402  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:48.904013  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:48.904029  229762 cri.go:96] found id: ""
	I1221 20:21:48.904036  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:48.904079  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.907634  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:48.907683  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:48.942728  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:48.942748  229762 cri.go:96] found id: ""
	I1221 20:21:48.942755  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:48.942808  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.946866  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:48.946889  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:48.962679  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:48.962707  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:49.024967  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:49.024990  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:49.025006  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:49.072249  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:49.072291  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:49.120367  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:49.120400  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:49.164972  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:49.165008  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:49.201081  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:49.201107  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:49.239157  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:49.239192  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:49.279543  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:49.279572  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:49.345265  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:49.345297  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:49.378951  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:49.378983  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:49.437873  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:49.437905  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:49.476896  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:49.476928  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:52.072314  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:52.072718  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:52.072764  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:52.072815  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:52.117111  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:52.117133  229762 cri.go:96] found id: ""
	I1221 20:21:52.117143  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:52.117194  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.121423  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:52.121496  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:52.163120  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:52.163136  229762 cri.go:96] found id: ""
	I1221 20:21:52.163143  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:52.163189  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.167010  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:52.167069  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:52.217969  229762 cri.go:96] found id: ""
	I1221 20:21:52.218003  229762 logs.go:282] 0 containers: []
	W1221 20:21:52.218012  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:52.218020  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:52.218076  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:52.260286  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:52.260308  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:52.260314  229762 cri.go:96] found id: ""
	I1221 20:21:52.260323  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:52.260375  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.264262  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.267824  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:52.267883  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:52.304090  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:52.304107  229762 cri.go:96] found id: ""
	I1221 20:21:52.304114  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:52.304148  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.307944  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:52.308004  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:52.345455  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:52.345475  229762 cri.go:96] found id: ""
	I1221 20:21:52.345484  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:52.345537  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.349727  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:52.349792  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:52.388509  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:52.388529  229762 cri.go:96] found id: ""
	I1221 20:21:52.388538  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:52.388575  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.392182  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:52.392266  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:52.429151  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:52.429173  229762 cri.go:96] found id: ""
	I1221 20:21:52.429183  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:52.429263  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.433019  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:52.433036  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:52.482213  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:52.482257  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:52.524687  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:52.524714  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:52.543421  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:52.543460  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:52.586003  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:52.586042  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:52.630589  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:52.630622  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:52.671410  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:52.671436  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:52.713424  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:52.713458  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:52.749137  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:52.749167  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:52.808426  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:52.808462  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:52.912774  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:52.912804  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:52.976316  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:52.976340  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:52.976358  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:53.023899  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:53.023924  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:52.284037  263272 out.go:252] * Updating the running docker "pause-592353" container ...
	I1221 20:21:52.284073  263272 machine.go:94] provisionDockerMachine start ...
	I1221 20:21:52.284168  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.303301  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.303568  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.303583  263272 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:21:52.443186  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-592353
	
	I1221 20:21:52.443269  263272 ubuntu.go:182] provisioning hostname "pause-592353"
	I1221 20:21:52.443345  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.467065  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.467369  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.467389  263272 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-592353 && echo "pause-592353" | sudo tee /etc/hostname
	I1221 20:21:52.623449  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-592353
	
	I1221 20:21:52.623539  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.645017  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.645338  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.645367  263272 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-592353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-592353/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-592353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:21:52.785678  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:21:52.785708  263272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:21:52.785737  263272 ubuntu.go:190] setting up certificates
	I1221 20:21:52.785748  263272 provision.go:84] configureAuth start
	I1221 20:21:52.785805  263272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-592353
	I1221 20:21:52.806027  263272 provision.go:143] copyHostCerts
	I1221 20:21:52.806106  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:21:52.806118  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:21:52.806185  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:21:52.806340  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:21:52.806353  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:21:52.806383  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:21:52.806447  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:21:52.806455  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:21:52.806478  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:21:52.806528  263272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.pause-592353 san=[127.0.0.1 192.168.85.2 localhost minikube pause-592353]
	I1221 20:21:52.839828  263272 provision.go:177] copyRemoteCerts
	I1221 20:21:52.839898  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:21:52.839944  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.858118  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:52.958044  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:21:52.979662  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:21:52.999500  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1221 20:21:53.017824  263272 provision.go:87] duration metric: took 232.063395ms to configureAuth
	I1221 20:21:53.017849  263272 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:21:53.018053  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:53.018166  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.037454  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:53.037762  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:53.037784  263272 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:21:53.353880  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:21:53.353906  263272 machine.go:97] duration metric: took 1.069826019s to provisionDockerMachine
	I1221 20:21:53.353917  263272 start.go:293] postStartSetup for "pause-592353" (driver="docker")
	I1221 20:21:53.353926  263272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:21:53.353983  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:21:53.354028  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.371991  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.469491  263272 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:21:53.472887  263272 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:21:53.472919  263272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:21:53.472933  263272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:21:53.472996  263272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:21:53.473092  263272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:21:53.473218  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:21:53.481185  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:53.499436  263272 start.go:296] duration metric: took 145.503948ms for postStartSetup
	I1221 20:21:53.499507  263272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:21:53.499552  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.517632  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.611145  263272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:21:53.615603  263272 fix.go:56] duration metric: took 1.354559475s for fixHost
	I1221 20:21:53.615631  263272 start.go:83] releasing machines lock for "pause-592353", held for 1.354606217s
	I1221 20:21:53.615701  263272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-592353
	I1221 20:21:53.633939  263272 ssh_runner.go:195] Run: cat /version.json
	I1221 20:21:53.633995  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.634016  263272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:21:53.634087  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.654786  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.655731  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.806036  263272 ssh_runner.go:195] Run: systemctl --version
	I1221 20:21:53.812583  263272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:21:53.848736  263272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:21:53.853585  263272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:21:53.853648  263272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:21:53.861257  263272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:21:53.861280  263272 start.go:496] detecting cgroup driver to use...
	I1221 20:21:53.861312  263272 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:21:53.861357  263272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:21:53.875038  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:21:53.886705  263272 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:21:53.886756  263272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:21:53.900558  263272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:21:53.911993  263272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:21:54.020150  263272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:21:54.123634  263272 docker.go:234] disabling docker service ...
	I1221 20:21:54.123705  263272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:21:54.138199  263272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:21:54.150458  263272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:21:54.255333  263272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:21:54.360766  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:21:54.373048  263272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:21:54.387057  263272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:21:54.387127  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.395367  263272 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:21:54.395418  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.403743  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.411953  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.420621  263272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:21:54.428539  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.437007  263272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.444982  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.453698  263272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:21:54.461409  263272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:21:54.468967  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:54.575453  263272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:21:54.742529  263272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:21:54.742588  263272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:21:54.746480  263272 start.go:564] Will wait 60s for crictl version
	I1221 20:21:54.746537  263272 ssh_runner.go:195] Run: which crictl
	I1221 20:21:54.749998  263272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:21:54.774440  263272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:21:54.774528  263272 ssh_runner.go:195] Run: crio --version
	I1221 20:21:54.801705  263272 ssh_runner.go:195] Run: crio --version
	I1221 20:21:54.830771  263272 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:21:52.573556  260566 out.go:252]   - Generating certificates and keys ...
	I1221 20:21:52.573652  260566 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:21:52.573744  260566 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:21:53.016740  260566 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:21:53.328944  260566 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:21:53.503553  260566 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:21:53.615670  260566 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:21:54.077499  260566 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:21:54.077703  260566 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-149976 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1221 20:21:54.142126  260566 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:21:54.142341  260566 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-149976 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1221 20:21:54.435472  260566 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:21:54.572710  260566 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:21:54.725512  260566 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:21:54.725611  260566 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:21:54.801275  260566 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:21:54.832089  263272 cli_runner.go:164] Run: docker network inspect pause-592353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:21:54.852701  263272 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1221 20:21:54.856903  263272 kubeadm.go:884] updating cluster {Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:21:54.857054  263272 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:54.857092  263272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:54.890869  263272 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:54.890889  263272 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:21:54.890928  263272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:54.916446  263272 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:54.916466  263272 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:21:54.916473  263272 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1221 20:21:54.916564  263272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-592353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:21:54.916623  263272 ssh_runner.go:195] Run: crio config
	I1221 20:21:54.962610  263272 cni.go:84] Creating CNI manager for ""
	I1221 20:21:54.962631  263272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:54.962645  263272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:21:54.962673  263272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-592353 NodeName:pause-592353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:21:54.962819  263272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-592353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:21:54.962886  263272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:21:54.971076  263272 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:21:54.971145  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:21:54.978510  263272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1221 20:21:54.990874  263272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:21:55.004192  263272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1221 20:21:55.015957  263272 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:21:55.019623  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:55.125013  263272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:55.137282  263272 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353 for IP: 192.168.85.2
	I1221 20:21:55.137305  263272 certs.go:195] generating shared ca certs ...
	I1221 20:21:55.137320  263272 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.137480  263272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:21:55.137562  263272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:21:55.137580  263272 certs.go:257] generating profile certs ...
	I1221 20:21:55.137679  263272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key
	I1221 20:21:55.137751  263272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.key.e15d9711
	I1221 20:21:55.137805  263272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.key
	I1221 20:21:55.137947  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:21:55.137997  263272 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:21:55.138012  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:21:55.138047  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:21:55.138083  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:21:55.138117  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:21:55.138170  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:55.138741  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:21:55.156542  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:21:55.173124  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:21:55.189521  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:21:55.206933  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 20:21:55.224990  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:21:55.242013  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:21:55.258462  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:21:55.275238  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:21:55.291268  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:21:55.307456  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:21:55.322860  263272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:21:55.334485  263272 ssh_runner.go:195] Run: openssl version
	I1221 20:21:55.340313  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.346838  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:21:55.353446  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.357006  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.357074  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.390502  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:21:55.397782  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.404974  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:21:55.412210  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.416265  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.416312  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.450379  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:21:55.457745  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.465054  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:21:55.472164  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.475893  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.475946  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.512592  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:21:55.520636  263272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:21:55.525059  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:21:55.559306  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:21:55.593589  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:21:55.631571  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:21:55.671334  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:21:55.709616  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
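The six openssl runs above are minikube's certificate-freshness check: `x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether the profile's certificates need regenerating. The same check can be repeated by hand on the node, for example:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h (or unreadable)"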
	I1221 20:21:55.746910  263272 kubeadm.go:401] StartCluster: {Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:55.747028  263272 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:21:55.747083  263272 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:21:55.777766  263272 cri.go:96] found id: "d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7"
	I1221 20:21:55.777793  263272 cri.go:96] found id: "8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30"
	I1221 20:21:55.777799  263272 cri.go:96] found id: "42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8"
	I1221 20:21:55.777804  263272 cri.go:96] found id: "5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d"
	I1221 20:21:55.777809  263272 cri.go:96] found id: "1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c"
	I1221 20:21:55.777814  263272 cri.go:96] found id: "c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c"
	I1221 20:21:55.777819  263272 cri.go:96] found id: "201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133"
	I1221 20:21:55.777823  263272 cri.go:96] found id: ""
	I1221 20:21:55.777877  263272 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:21:55.790212  263272 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:21:55Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:21:55.790307  263272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:21:55.798868  263272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:21:55.798889  263272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:21:55.798933  263272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:21:55.806573  263272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:21:55.807493  263272 kubeconfig.go:125] found "pause-592353" server: "https://192.168.85.2:8443"
	I1221 20:21:55.808773  263272 kapi.go:59] client config for pause-592353: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key", CAFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 20:21:55.809321  263272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1221 20:21:55.809345  263272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1221 20:21:55.809352  263272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1221 20:21:55.809361  263272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1221 20:21:55.809367  263272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1221 20:21:55.809774  263272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:21:55.817042  263272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1221 20:21:55.817074  263272 kubeadm.go:602] duration metric: took 18.17799ms to restartPrimaryControlPlane
	I1221 20:21:55.817084  263272 kubeadm.go:403] duration metric: took 70.183735ms to StartCluster
	I1221 20:21:55.817108  263272 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.817174  263272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:21:55.818499  263272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.818776  263272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:21:55.818837  263272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:21:55.819009  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:55.824535  263272 out.go:179] * Verifying Kubernetes components...
	I1221 20:21:55.824538  263272 out.go:179] * Enabled addons: 
	I1221 20:21:55.797555  260566 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:21:56.271892  260566 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:21:56.315623  260566 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:21:56.510955  260566 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:21:56.511583  260566 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:21:56.515075  260566 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:21:55.826572  263272 addons.go:530] duration metric: took 7.742462ms for enable addons: enabled=[]
	I1221 20:21:55.826580  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:55.950216  263272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:55.965888  263272 node_ready.go:35] waiting up to 6m0s for node "pause-592353" to be "Ready" ...
	I1221 20:21:55.974838  263272 node_ready.go:49] node "pause-592353" is "Ready"
	I1221 20:21:55.974868  263272 node_ready.go:38] duration metric: took 8.947576ms for node "pause-592353" to be "Ready" ...
	I1221 20:21:55.974885  263272 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:21:55.974934  263272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:21:55.987247  263272 api_server.go:72] duration metric: took 168.434824ms to wait for apiserver process to appear ...
	I1221 20:21:55.987272  263272 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:21:55.987292  263272 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1221 20:21:55.993009  263272 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1221 20:21:55.993955  263272 api_server.go:141] control plane version: v1.34.3
	I1221 20:21:55.993980  263272 api_server.go:131] duration metric: took 6.700305ms to wait for apiserver health ...
	I1221 20:21:55.993989  263272 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:21:55.996763  263272 system_pods.go:59] 7 kube-system pods found
	I1221 20:21:55.996791  263272 system_pods.go:61] "coredns-66bc5c9577-vfzl5" [3062e359-50b1-472d-bc8c-41564481dd9c] Running
	I1221 20:21:55.996799  263272 system_pods.go:61] "etcd-pause-592353" [4a6b784d-8cae-48ee-ad04-22b61eca649a] Running
	I1221 20:21:55.996805  263272 system_pods.go:61] "kindnet-fz2nh" [9a6c131b-77a7-4697-aa59-1106a4d885ac] Running
	I1221 20:21:55.996813  263272 system_pods.go:61] "kube-apiserver-pause-592353" [9d194f3b-8417-42a4-9ecd-5071a3d4d590] Running
	I1221 20:21:55.996828  263272 system_pods.go:61] "kube-controller-manager-pause-592353" [e364cd8e-1b00-4aa6-95d9-062063622a77] Running
	I1221 20:21:55.996835  263272 system_pods.go:61] "kube-proxy-j8r2s" [73638941-53c1-4078-aea3-e51da00fb427] Running
	I1221 20:21:55.996843  263272 system_pods.go:61] "kube-scheduler-pause-592353" [2d2bac2c-eeca-4e21-9e5b-46e1b126d279] Running
	I1221 20:21:55.996851  263272 system_pods.go:74] duration metric: took 2.854911ms to wait for pod list to return data ...
	I1221 20:21:55.996864  263272 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:21:55.999021  263272 default_sa.go:45] found service account: "default"
	I1221 20:21:55.999047  263272 default_sa.go:55] duration metric: took 2.173828ms for default service account to be created ...
	I1221 20:21:55.999059  263272 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:21:56.001898  263272 system_pods.go:86] 7 kube-system pods found
	I1221 20:21:56.001922  263272 system_pods.go:89] "coredns-66bc5c9577-vfzl5" [3062e359-50b1-472d-bc8c-41564481dd9c] Running
	I1221 20:21:56.001931  263272 system_pods.go:89] "etcd-pause-592353" [4a6b784d-8cae-48ee-ad04-22b61eca649a] Running
	I1221 20:21:56.001937  263272 system_pods.go:89] "kindnet-fz2nh" [9a6c131b-77a7-4697-aa59-1106a4d885ac] Running
	I1221 20:21:56.001954  263272 system_pods.go:89] "kube-apiserver-pause-592353" [9d194f3b-8417-42a4-9ecd-5071a3d4d590] Running
	I1221 20:21:56.001964  263272 system_pods.go:89] "kube-controller-manager-pause-592353" [e364cd8e-1b00-4aa6-95d9-062063622a77] Running
	I1221 20:21:56.001971  263272 system_pods.go:89] "kube-proxy-j8r2s" [73638941-53c1-4078-aea3-e51da00fb427] Running
	I1221 20:21:56.001977  263272 system_pods.go:89] "kube-scheduler-pause-592353" [2d2bac2c-eeca-4e21-9e5b-46e1b126d279] Running
	I1221 20:21:56.001986  263272 system_pods.go:126] duration metric: took 2.919437ms to wait for k8s-apps to be running ...
	I1221 20:21:56.001994  263272 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:21:56.002043  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:21:56.015337  263272 system_svc.go:56] duration metric: took 13.335334ms WaitForService to wait for kubelet
	I1221 20:21:56.015365  263272 kubeadm.go:587] duration metric: took 196.556893ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:21:56.015385  263272 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:21:56.017647  263272 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:21:56.017668  263272 node_conditions.go:123] node cpu capacity is 8
	I1221 20:21:56.017680  263272 node_conditions.go:105] duration metric: took 2.289962ms to run NodePressure ...
	I1221 20:21:56.017691  263272 start.go:242] waiting for startup goroutines ...
	I1221 20:21:56.017698  263272 start.go:247] waiting for cluster config update ...
	I1221 20:21:56.017704  263272 start.go:256] writing updated cluster config ...
	I1221 20:21:56.017970  263272 ssh_runner.go:195] Run: rm -f paused
	I1221 20:21:56.021879  263272 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:21:56.022716  263272 kapi.go:59] client config for pause-592353: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key", CAFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 20:21:56.025336  263272 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfzl5" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.029679  263272 pod_ready.go:94] pod "coredns-66bc5c9577-vfzl5" is "Ready"
	I1221 20:21:56.029701  263272 pod_ready.go:86] duration metric: took 4.343266ms for pod "coredns-66bc5c9577-vfzl5" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.031630  263272 pod_ready.go:83] waiting for pod "etcd-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.035192  263272 pod_ready.go:94] pod "etcd-pause-592353" is "Ready"
	I1221 20:21:56.035211  263272 pod_ready.go:86] duration metric: took 3.561411ms for pod "etcd-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.037299  263272 pod_ready.go:83] waiting for pod "kube-apiserver-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.040876  263272 pod_ready.go:94] pod "kube-apiserver-pause-592353" is "Ready"
	I1221 20:21:56.040897  263272 pod_ready.go:86] duration metric: took 3.577184ms for pod "kube-apiserver-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.042677  263272 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.427038  263272 pod_ready.go:94] pod "kube-controller-manager-pause-592353" is "Ready"
	I1221 20:21:56.427067  263272 pod_ready.go:86] duration metric: took 384.371534ms for pod "kube-controller-manager-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.625790  263272 pod_ready.go:83] waiting for pod "kube-proxy-j8r2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.026359  263272 pod_ready.go:94] pod "kube-proxy-j8r2s" is "Ready"
	I1221 20:21:57.026386  263272 pod_ready.go:86] duration metric: took 400.573697ms for pod "kube-proxy-j8r2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.226277  263272 pod_ready.go:83] waiting for pod "kube-scheduler-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.626613  263272 pod_ready.go:94] pod "kube-scheduler-pause-592353" is "Ready"
	I1221 20:21:57.626646  263272 pod_ready.go:86] duration metric: took 400.340853ms for pod "kube-scheduler-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.626663  263272 pod_ready.go:40] duration metric: took 1.604756714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:21:57.679471  263272 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:21:57.684341  263272 out.go:179] * Done! kubectl is now configured to use "pause-592353" cluster and "default" namespace by default
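With the restart finished, the kubeconfig has been rewritten to point at the pause-592353 cluster. The readiness conditions the test waited for (node Ready, the seven kube-system pods Running) can be spot-checked manually with stock kubectl, assuming the context name matches the profile name as minikube normally sets it:

	kubectl --context pause-592353 get nodes
	kubectl --context pause-592353 get pods -n kube-system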
	I1221 20:21:55.599297  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:55.599714  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:55.599759  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:55.599806  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:55.634826  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:55.634848  229762 cri.go:96] found id: ""
	I1221 20:21:55.634858  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:55.634908  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.638665  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:55.638724  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:55.673795  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:55.673813  229762 cri.go:96] found id: ""
	I1221 20:21:55.673821  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:55.673870  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.677600  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:55.677653  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:55.712159  229762 cri.go:96] found id: ""
	I1221 20:21:55.712179  229762 logs.go:282] 0 containers: []
	W1221 20:21:55.712187  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:55.712193  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:55.712254  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:55.747249  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:55.747268  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:55.747274  229762 cri.go:96] found id: ""
	I1221 20:21:55.747284  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:55.747330  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.751172  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.754440  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:55.754502  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:55.792146  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:55.792165  229762 cri.go:96] found id: ""
	I1221 20:21:55.792174  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:55.792248  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.795836  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:55.795898  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:55.832213  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:55.832260  229762 cri.go:96] found id: ""
	I1221 20:21:55.832269  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:55.832317  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.836127  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:55.836185  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:55.878346  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:55.878369  229762 cri.go:96] found id: ""
	I1221 20:21:55.878378  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:55.878432  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.881986  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:55.882056  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:55.916788  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:55.916809  229762 cri.go:96] found id: ""
	I1221 20:21:55.916815  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:55.916865  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.920383  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:55.920406  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:55.962110  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:55.962137  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:56.005217  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:56.005266  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:56.022025  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:56.022049  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:56.069736  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:56.069767  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:56.109154  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:56.109190  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:56.164484  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:56.164516  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:56.206018  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:56.206043  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:56.307204  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:56.307249  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:56.367213  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:56.367249  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:56.367265  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:56.403958  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:56.403986  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:56.445457  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:56.445486  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:56.513482  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:56.513515  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
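Each "Gathering logs for ..." step above shells out to `crictl logs --tail 400 <container-id>` for per-component logs and to journalctl for the kubelet and CRI-O units. The same output can be pulled manually on the node, e.g. for the kube-apiserver container ID found earlier:

	sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7
	sudo journalctl -u crio -n 400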
	
	
	==> CRI-O <==
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.659409342Z" level=info msg="RDT not available in the host system"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.659425474Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660217849Z" level=info msg="Conmon does support the --sync option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660246661Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660263997Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.661162831Z" level=info msg="Conmon does support the --sync option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.661182857Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.664815633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.664838374Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.665576463Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.665977862Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.666018075Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.736870319Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-vfzl5 Namespace:kube-system ID:64e76d1304030a76c17da55d05a94ce0677375453ff6fa163c1d84abf9210421 UID:3062e359-50b1-472d-bc8c-41564481dd9c NetNS:/var/run/netns/0c21b218-ac64-4707-9dca-2136e791543f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006263d0}] Aliases:map[]}"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.737074537Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-vfzl5 for CNI network kindnet (type=ptp)"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738150686Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738244682Z" level=info msg="Starting seccomp notifier watcher"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738377529Z" level=info msg="Create NRI interface"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738852629Z" level=info msg="built-in NRI default validator is disabled"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738878035Z" level=info msg="runtime interface created"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738891907Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738900177Z" level=info msg="runtime interface starting up..."
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738908144Z" level=info msg="starting plugins..."
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738921323Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.739278278Z" level=info msg="No systemd watchdog enabled"
	Dec 21 20:21:54 pause-592353 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d7ddab942fcf3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     11 seconds ago      Running             coredns                   0                   64e76d1304030       coredns-66bc5c9577-vfzl5               kube-system
	8d5f874a66210       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   22 seconds ago      Running             kindnet-cni               0                   b1b209ece555e       kindnet-fz2nh                          kube-system
	42a6f973de3c4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     24 seconds ago      Running             kube-proxy                0                   c0f0bd72ff2ea       kube-proxy-j8r2s                       kube-system
	5231ce47f2d8f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     34 seconds ago      Running             kube-scheduler            0                   157a40dbeee5a       kube-scheduler-pause-592353            kube-system
	1a16fa514a1ef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     34 seconds ago      Running             etcd                      0                   3c471a4586535       etcd-pause-592353                      kube-system
	c3d9d9135faab       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     34 seconds ago      Running             kube-apiserver            0                   3a85cc0609c9c       kube-apiserver-pause-592353            kube-system
	201d5aae363ca       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     34 seconds ago      Running             kube-controller-manager   0                   c7cee7a4e2e49       kube-controller-manager-pause-592353   kube-system
	
	
	==> coredns [d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50619 - 8314 "HINFO IN 4954424143234619073.6872614671334231471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.459529952s
	
	
	==> describe nodes <==
	Name:               pause-592353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-592353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-592353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_21_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:21:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-592353
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:21:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-592353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                4107c20a-45c6-43e4-840d-321036df5d2f
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-vfzl5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-592353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-fz2nh                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-592353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-592353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-j8r2s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-592353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-592353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-592353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-592353 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-592353 event: Registered Node pause-592353 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-592353 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085350] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025061] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.894686] kauditd_printk_skb: 47 callbacks suppressed
	[Dec21 19:48] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.000151] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023871] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023899] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +2.047760] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +4.031573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +8.255179] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 19:49] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[ +32.252695] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	
	
	==> etcd [1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c] <==
	{"level":"warn","ts":"2025-12-21T20:21:27.018918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.026718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.035260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.042692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.048813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.055254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.065940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.071873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.077877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.085249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.098319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.105618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.112783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.120944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.127184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.133652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.139853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.146793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.153162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.159347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.166610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.178634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.192331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.241649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-21T20:21:44.799209Z","caller":"traceutil/trace.go:172","msg":"trace[1504529588] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"189.702781ms","start":"2025-12-21T20:21:44.609491Z","end":"2025-12-21T20:21:44.799194Z","steps":["trace[1504529588] 'process raft request'  (duration: 189.573752ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:22:00 up  1:04,  0 user,  load average: 2.49, 2.92, 2.10
	Linux pause-592353 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30] <==
	I1221 20:21:37.933279       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:21:37.933788       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1221 20:21:37.933943       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:21:37.933976       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:21:37.934005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:21:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:21:38.135247       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:21:38.135278       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:21:38.135291       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:21:38.136147       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:21:38.510679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:21:38.510713       1 metrics.go:72] Registering metrics
	I1221 20:21:38.510815       1 controller.go:711] "Syncing nftables rules"
	I1221 20:21:48.136219       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:21:48.136331       1 main.go:301] handling current node
	I1221 20:21:58.141324       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:21:58.141366       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c] <==
	I1221 20:21:27.787645       1 policy_source.go:240] refreshing policies
	E1221 20:21:27.807652       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1221 20:21:27.855050       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:21:27.857592       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:27.857718       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1221 20:21:27.864577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:27.864794       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:21:27.953551       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:21:28.657604       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 20:21:28.661015       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:21:28.661035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:21:29.097384       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:21:29.130913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:21:29.263618       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:21:29.269514       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1221 20:21:29.270564       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:21:29.275119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:21:29.675553       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:21:30.085034       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:21:30.093378       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:21:30.101954       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:21:35.378387       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:21:35.678948       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:35.682421       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:35.727867       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133] <==
	I1221 20:21:34.675217       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1221 20:21:34.676299       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1221 20:21:34.676409       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 20:21:34.676509       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1221 20:21:34.676564       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 20:21:34.676578       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:21:34.677092       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 20:21:34.677195       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1221 20:21:34.678063       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 20:21:34.678094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 20:21:34.678117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1221 20:21:34.678726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 20:21:34.679408       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 20:21:34.681009       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:21:34.681059       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:21:34.681100       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:21:34.681113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:21:34.681130       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:21:34.683330       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1221 20:21:34.687136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:21:34.687276       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-592353" podCIDRs=["10.244.0.0/24"]
	I1221 20:21:34.690353       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 20:21:34.697694       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:21:34.704023       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:21:49.678170       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8] <==
	I1221 20:21:36.151026       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:21:36.237684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:21:36.338366       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:21:36.338410       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1221 20:21:36.338548       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:21:36.360556       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:21:36.360603       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:21:36.366793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:21:36.367280       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:21:36.367305       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:21:36.368869       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:21:36.368892       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:21:36.368942       1 config.go:200] "Starting service config controller"
	I1221 20:21:36.368958       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:21:36.368943       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:21:36.368976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:21:36.369034       1 config.go:309] "Starting node config controller"
	I1221 20:21:36.369050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:21:36.369059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:21:36.469386       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:21:36.469417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:21:36.469431       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d] <==
	I1221 20:21:28.128643       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:21:28.130468       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:21:28.130509       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:21:28.130845       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:21:28.130916       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1221 20:21:28.131797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1221 20:21:28.132275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 20:21:28.132545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 20:21:28.132673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 20:21:28.134107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:21:28.134214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:21:28.134276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1221 20:21:28.134410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:21:28.134463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:21:28.134512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 20:21:28.134678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 20:21:28.134697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 20:21:28.134730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 20:21:28.134759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 20:21:28.134834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:21:28.134908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 20:21:28.134909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 20:21:28.134931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 20:21:28.135063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1221 20:21:29.230601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:21:30 pause-592353 kubelet[1330]: I1221 20:21:30.988855    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-592353" podStartSLOduration=2.988173539 podStartE2EDuration="2.988173539s" podCreationTimestamp="2025-12-21 20:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:30.974485624 +0000 UTC m=+1.129328529" watchObservedRunningTime="2025-12-21 20:21:30.988173539 +0000 UTC m=+1.143016438"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.000347    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-592353" podStartSLOduration=3.000325487 podStartE2EDuration="3.000325487s" podCreationTimestamp="2025-12-21 20:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:30.989137479 +0000 UTC m=+1.143980380" watchObservedRunningTime="2025-12-21 20:21:31.000325487 +0000 UTC m=+1.155168382"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.009805    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-592353" podStartSLOduration=1.009784429 podStartE2EDuration="1.009784429s" podCreationTimestamp="2025-12-21 20:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:31.000764121 +0000 UTC m=+1.155607020" watchObservedRunningTime="2025-12-21 20:21:31.009784429 +0000 UTC m=+1.164627331"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.009937    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-592353" podStartSLOduration=1.009929671 podStartE2EDuration="1.009929671s" podCreationTimestamp="2025-12-21 20:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:31.009572568 +0000 UTC m=+1.164415471" watchObservedRunningTime="2025-12-21 20:21:31.009929671 +0000 UTC m=+1.164772574"
	Dec 21 20:21:34 pause-592353 kubelet[1330]: I1221 20:21:34.765636    1330 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 20:21:34 pause-592353 kubelet[1330]: I1221 20:21:34.766332    1330 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760535    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73638941-53c1-4078-aea3-e51da00fb427-xtables-lock\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760569    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73638941-53c1-4078-aea3-e51da00fb427-lib-modules\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760586    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73638941-53c1-4078-aea3-e51da00fb427-kube-proxy\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760608    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2j9\" (UniqueName: \"kubernetes.io/projected/73638941-53c1-4078-aea3-e51da00fb427-kube-api-access-dr2j9\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861652    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbt5\" (UniqueName: \"kubernetes.io/projected/9a6c131b-77a7-4697-aa59-1106a4d885ac-kube-api-access-wrbt5\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861697    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-xtables-lock\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861768    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-lib-modules\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861915    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-cni-cfg\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:36 pause-592353 kubelet[1330]: I1221 20:21:36.975935    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8r2s" podStartSLOduration=1.9759142729999999 podStartE2EDuration="1.975914273s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:36.975792243 +0000 UTC m=+7.130635146" watchObservedRunningTime="2025-12-21 20:21:36.975914273 +0000 UTC m=+7.130757177"
	Dec 21 20:21:38 pause-592353 kubelet[1330]: I1221 20:21:38.602056    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fz2nh" podStartSLOduration=1.900073106 podStartE2EDuration="3.602034866s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="2025-12-21 20:21:36.063213689 +0000 UTC m=+6.218056582" lastFinishedPulling="2025-12-21 20:21:37.765175449 +0000 UTC m=+7.920018342" observedRunningTime="2025-12-21 20:21:37.980201923 +0000 UTC m=+8.135044821" watchObservedRunningTime="2025-12-21 20:21:38.602034866 +0000 UTC m=+8.756877767"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.675443    1330 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.754653    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3062e359-50b1-472d-bc8c-41564481dd9c-config-volume\") pod \"coredns-66bc5c9577-vfzl5\" (UID: \"3062e359-50b1-472d-bc8c-41564481dd9c\") " pod="kube-system/coredns-66bc5c9577-vfzl5"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.754707    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgh4\" (UniqueName: \"kubernetes.io/projected/3062e359-50b1-472d-bc8c-41564481dd9c-kube-api-access-wjgh4\") pod \"coredns-66bc5c9577-vfzl5\" (UID: \"3062e359-50b1-472d-bc8c-41564481dd9c\") " pod="kube-system/coredns-66bc5c9577-vfzl5"
	Dec 21 20:21:50 pause-592353 kubelet[1330]: I1221 20:21:50.004168    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vfzl5" podStartSLOduration=15.004147268 podStartE2EDuration="15.004147268s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:50.003862505 +0000 UTC m=+20.158705419" watchObservedRunningTime="2025-12-21 20:21:50.004147268 +0000 UTC m=+20.158990168"
	Dec 21 20:21:54 pause-592353 kubelet[1330]: E1221 20:21:54.946884    1330 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 21 20:21:58 pause-592353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:21:58 pause-592353 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:21:58 pause-592353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:21:58 pause-592353 systemd[1]: kubelet.service: Consumed 1.213s CPU time.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-592353 -n pause-592353
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-592353 -n pause-592353: exit status 2 (340.046571ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
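Note: minikube's status command encodes host/kubelet/apiserver state in its exit code, so the post-mortem helper above tolerates the non-zero exit and records it as "(may be ok)" instead of aborting. A minimal Go sketch of that pattern (not the actual helpers_test.go code; the binary path, profile, and flags are copied from the command above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the same status query the harness runs above.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-592353", "-n", "pause-592353")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		// A non-zero exit code is logged but not treated as a hard failure.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}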
helpers_test.go:270: (dbg) Run:  kubectl --context pause-592353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-592353
helpers_test.go:244: (dbg) docker inspect pause-592353:

-- stdout --
	[
	    {
	        "Id": "e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517",
	        "Created": "2025-12-21T20:21:14.812580296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:21:14.850690371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/hosts",
	        "LogPath": "/var/lib/docker/containers/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517/e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517-json.log",
	        "Name": "/pause-592353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-592353:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-592353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5171eb31530e3f8b818896ef82ad96e27313017b2ea1f3dbcff86a7e3f30517",
	                "LowerDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25f0fcf13cfa0b52fcb5afcb67e5d0340c51b620891122acc38c62f4aa249c66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-592353",
	                "Source": "/var/lib/docker/volumes/pause-592353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-592353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-592353",
	                "name.minikube.sigs.k8s.io": "pause-592353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5bcf56d7821acdc2d87a533566f5d1428162b773aa45bcc640b7f233246ebc27",
	            "SandboxKey": "/var/run/docker/netns/5bcf56d7821a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-592353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fa2c6c36158ec4453936581b63a2d728a2a1fa9f3e30e177dbd4ba7230cda13",
	                    "EndpointID": "b1c26713c5cb69eec074eeec788b8e0d83e7770b7cb20cd758d277434a215979",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ea:72:eb:b5:4e:77",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-592353",
	                        "e5171eb31530"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-592353 -n pause-592353
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-592353 -n pause-592353: exit status 2 (335.888483ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-592353 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-149976 sudo crio config                                                                                                                                                                                         │ cilium-149976             │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │                     │
	│ delete  │ -p cilium-149976                                                                                                                                                                                                          │ cilium-149976             │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p force-systemd-env-558127 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-558127  │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p running-upgrade-707221 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-707221    │ jenkins │ v1.35.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ delete  │ -p force-systemd-env-558127                                                                                                                                                                                               │ force-systemd-env-558127  │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:18 UTC │
	│ start   │ -p running-upgrade-707221 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-707221    │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │                     │
	│ start   │ -p test-preload-115092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio                                                                                              │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:18 UTC │ 21 Dec 25 20:19 UTC │
	│ start   │ -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                             │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ image   │ test-preload-115092 image pull public.ecr.aws/docker/library/busybox:latest                                                                                                                                               │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ stop    │ -p test-preload-115092                                                                                                                                                                                                    │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ delete  │ -p kubernetes-upgrade-291108                                                                                                                                                                                              │ kubernetes-upgrade-291108 │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:19 UTC │
	│ start   │ -p cert-expiration-026803 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-026803    │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:20 UTC │
	│ start   │ -p test-preload-115092 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                                                                                        │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:19 UTC │ 21 Dec 25 20:20 UTC │
	│ image   │ test-preload-115092 image list                                                                                                                                                                                            │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:20 UTC │
	│ delete  │ -p test-preload-115092                                                                                                                                                                                                    │ test-preload-115092       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:20 UTC │
	│ start   │ -p cert-options-746684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:20 UTC │ 21 Dec 25 20:21 UTC │
	│ ssh     │ cert-options-746684 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ ssh     │ -p cert-options-746684 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ delete  │ -p cert-options-746684                                                                                                                                                                                                    │ cert-options-746684       │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ start   │ -p pause-592353 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ delete  │ -p stopped-upgrade-611850                                                                                                                                                                                                 │ stopped-upgrade-611850    │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ start   │ -p auto-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-149976               │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │                     │
	│ start   │ -p pause-592353 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │ 21 Dec 25 20:21 UTC │
	│ pause   │ -p pause-592353 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-592353              │ jenkins │ v1.37.0 │ 21 Dec 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:21:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:21:52.053544  263272 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:21:52.053771  263272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:52.053779  263272 out.go:374] Setting ErrFile to fd 2...
	I1221 20:21:52.053783  263272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:21:52.053971  263272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:21:52.054423  263272 out.go:368] Setting JSON to false
	I1221 20:21:52.055507  263272 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3861,"bootTime":1766344651,"procs":387,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:21:52.055572  263272 start.go:143] virtualization: kvm guest
	I1221 20:21:52.057812  263272 out.go:179] * [pause-592353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:21:52.059529  263272 notify.go:221] Checking for updates...
	I1221 20:21:52.059535  263272 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:21:52.060915  263272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:21:52.062079  263272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:21:52.063718  263272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:21:52.064932  263272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:21:52.066149  263272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:21:52.067924  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:52.068656  263272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:21:52.094017  263272 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:21:52.094161  263272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:21:52.157190  263272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-21 20:21:52.146866188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:21:52.157314  263272 docker.go:319] overlay module found
	I1221 20:21:52.160156  263272 out.go:179] * Using the docker driver based on existing profile
	I1221 20:21:52.161376  263272 start.go:309] selected driver: docker
	I1221 20:21:52.161394  263272 start.go:928] validating driver "docker" against &{Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.161519  263272 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:21:52.161600  263272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:21:52.228714  263272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-21 20:21:52.217806539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:21:52.229551  263272 cni.go:84] Creating CNI manager for ""
	I1221 20:21:52.229643  263272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:52.229711  263272 start.go:353] cluster config:
	{Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.233555  263272 out.go:179] * Starting "pause-592353" primary control-plane node in "pause-592353" cluster
	I1221 20:21:52.234609  263272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:21:52.236120  263272 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:21:52.237183  263272 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:52.237248  263272 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:21:52.237269  263272 cache.go:65] Caching tarball of preloaded images
	I1221 20:21:52.237278  263272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:21:52.237356  263272 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:21:52.237370  263272 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:21:52.237532  263272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/config.json ...
	I1221 20:21:52.260880  263272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:21:52.260901  263272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:21:52.260922  263272 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:21:52.260955  263272 start.go:360] acquireMachinesLock for pause-592353: {Name:mk82f022bb0c28df78da9902d0a1772d3ef40883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:21:52.261016  263272 start.go:364] duration metric: took 40.452µs to acquireMachinesLock for "pause-592353"
	I1221 20:21:52.261031  263272 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:21:52.261037  263272 fix.go:54] fixHost starting: 
	I1221 20:21:52.261342  263272 cli_runner.go:164] Run: docker container inspect pause-592353 --format={{.State.Status}}
	I1221 20:21:52.282137  263272 fix.go:112] recreateIfNeeded on pause-592353: state=Running err=<nil>
	W1221 20:21:52.282168  263272 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:21:51.174269  260566 cli_runner.go:164] Run: docker network inspect auto-149976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:21:51.190867  260566 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:21:51.194801  260566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:21:51.204533  260566 kubeadm.go:884] updating cluster {Name:auto-149976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:21:51.204653  260566 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:51.204707  260566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:51.235457  260566 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:51.235475  260566 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:21:51.235515  260566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:51.260081  260566 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:51.260100  260566 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:21:51.260108  260566 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1221 20:21:51.260185  260566 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-149976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
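The unit drop-in rendered above is what the run writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node (the scp step appears a few lines below). As a minimal sketch — assuming the auto-149976 node is still running — the rendered unit can be inspected directly rather than reconstructed from the log:

    # inspect the kubelet unit and its minikube drop-in on the node (illustrative, not part of the captured log)
    minikube -p auto-149976 ssh -- systemctl cat kubelet
    minikube -p auto-149976 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf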
	I1221 20:21:51.260269  260566 ssh_runner.go:195] Run: crio config
	I1221 20:21:51.303157  260566 cni.go:84] Creating CNI manager for ""
	I1221 20:21:51.303178  260566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:51.303193  260566 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:21:51.303218  260566 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-149976 NodeName:auto-149976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:21:51.303359  260566 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-149976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
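The YAML above is the kubeadm configuration minikube renders for this node; it is copied to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init later in this log. As a hedged sketch (same file, same bundled kubeadm binary), the config can be exercised without mutating cluster state by using kubeadm's --dry-run mode:

    # dry-run the generated config on the node (illustrative sketch; the real run below uses --ignore-preflight-errors)
    minikube -p auto-149976 ssh -- sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run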
	
	I1221 20:21:51.303417  260566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:21:51.311269  260566 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:21:51.311323  260566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:21:51.318634  260566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1221 20:21:51.330800  260566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:21:51.346534  260566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1221 20:21:51.358697  260566 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:21:51.362008  260566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:21:51.371512  260566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:51.450514  260566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:51.473582  260566 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976 for IP: 192.168.103.2
	I1221 20:21:51.473602  260566 certs.go:195] generating shared ca certs ...
	I1221 20:21:51.473623  260566 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.473757  260566 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:21:51.473795  260566 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:21:51.473804  260566 certs.go:257] generating profile certs ...
	I1221 20:21:51.473874  260566 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key
	I1221 20:21:51.473889  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt with IP's: []
	I1221 20:21:51.617372  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt ...
	I1221 20:21:51.617405  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: {Name:mk32f1716e31081c3f1f92da82e77097218f4068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.617595  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key ...
	I1221 20:21:51.617610  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.key: {Name:mkd18499cb9f20f33985edd153ec67d23828a67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.617708  260566 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c
	I1221 20:21:51.617726  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1221 20:21:51.680901  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c ...
	I1221 20:21:51.680930  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c: {Name:mk15e432ff9ff634bc3f2a4390091f10f87d3534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.681098  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c ...
	I1221 20:21:51.681118  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c: {Name:mk207e22acff0c833c7904639ed2437c73ed6a32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.681248  260566 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt.500a340c -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt
	I1221 20:21:51.681349  260566 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key.500a340c -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key
	I1221 20:21:51.681417  260566 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key
	I1221 20:21:51.681435  260566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt with IP's: []
	I1221 20:21:51.829188  260566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt ...
	I1221 20:21:51.829220  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt: {Name:mk026f66d1c1ef478db7d7b0f10f18c53c53b91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:51.829420  260566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key ...
	I1221 20:21:51.829436  260566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key: {Name:mk66d9f7850635be7883902a76da2ab65c9d9490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
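At this point the profile's client, apiserver, and aggregator proxy-client certificates have all been written; the apiserver cert above was issued for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]. A small sketch for double-checking those SANs on the workspace copy of the cert (standard openssl usage, not part of the captured log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt \
      | grep -A1 'Subject Alternative Name'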
	I1221 20:21:51.829635  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:21:51.829675  260566 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:21:51.829691  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:21:51.829718  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:21:51.829746  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:21:51.829774  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:21:51.829822  260566 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:51.830424  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:21:51.848169  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:21:51.865173  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:21:51.881617  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:21:51.897948  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1221 20:21:51.914497  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:21:51.931039  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:21:51.950531  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 20:21:51.967984  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:21:51.987234  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:21:52.006577  260566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:21:52.023317  260566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:21:52.036360  260566 ssh_runner.go:195] Run: openssl version
	I1221 20:21:52.043100  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.051316  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:21:52.059029  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.062737  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.062796  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:21:52.105105  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:21:52.115709  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:21:52.127207  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.135659  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:21:52.145260  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.149406  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.149470  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:52.197818  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:21:52.208440  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:21:52.218903  260566 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.228438  260566 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:21:52.237622  260566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.242113  260566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.242189  260566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:21:52.286846  260566 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:21:52.294926  260566 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
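The openssl x509 -hash / ln -fs pairs above build OpenSSL's hash-named trust links: the hash printed for a CA file becomes the <hash>.0 symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A quick sketch to verify that mapping by hand on the node, assuming the profile is still up:

    # print the subject hash, then confirm the matching symlink exists (illustrative)
    minikube -p auto-149976 ssh -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    minikube -p auto-149976 ssh -- ls -l /etc/ssl/certs/b5213941.0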
	I1221 20:21:52.303593  260566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:21:52.307470  260566 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:21:52.307530  260566 kubeadm.go:401] StartCluster: {Name:auto-149976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-149976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:52.307610  260566 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:21:52.307667  260566 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:21:52.336768  260566 cri.go:96] found id: ""
	I1221 20:21:52.336834  260566 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:21:52.345879  260566 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:21:52.354267  260566 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:21:52.354322  260566 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:21:52.362854  260566 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:21:52.362874  260566 kubeadm.go:158] found existing configuration files:
	
	I1221 20:21:52.362919  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:21:52.370523  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:21:52.370582  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:21:52.378219  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:21:52.386946  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:21:52.386992  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:21:52.394863  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:21:52.402521  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:21:52.402564  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:21:52.409795  260566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:21:52.417534  260566 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:21:52.417588  260566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:21:52.425990  260566 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:21:52.470845  260566 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 20:21:52.470929  260566 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 20:21:52.495437  260566 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 20:21:52.495528  260566 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 20:21:52.495621  260566 kubeadm.go:319] OS: Linux
	I1221 20:21:52.495688  260566 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 20:21:52.495766  260566 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 20:21:52.495842  260566 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 20:21:52.495913  260566 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 20:21:52.495986  260566 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 20:21:52.496079  260566 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 20:21:52.496157  260566 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 20:21:52.496265  260566 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 20:21:52.564313  260566 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 20:21:52.564487  260566 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 20:21:52.564626  260566 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 20:21:52.571671  260566 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 20:21:48.625316  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:48.625731  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:48.625784  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:48.625835  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:48.661558  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:48.661580  229762 cri.go:96] found id: ""
	I1221 20:21:48.661587  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:48.661629  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.665245  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:48.665314  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:48.707346  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:48.707373  229762 cri.go:96] found id: ""
	I1221 20:21:48.707385  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:48.707455  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.711789  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:48.711849  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:48.747380  229762 cri.go:96] found id: ""
	I1221 20:21:48.747408  229762 logs.go:282] 0 containers: []
	W1221 20:21:48.747419  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:48.747428  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:48.747489  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:48.781029  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:48.781051  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:48.781054  229762 cri.go:96] found id: ""
	I1221 20:21:48.781061  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:48.781117  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.784754  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.788703  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:48.788756  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:48.823779  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:48.823803  229762 cri.go:96] found id: ""
	I1221 20:21:48.823812  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:48.823865  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.827905  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:48.827969  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:48.862913  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:48.862939  229762 cri.go:96] found id: ""
	I1221 20:21:48.862953  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:48.863013  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.867339  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:48.867402  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:48.904013  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:48.904029  229762 cri.go:96] found id: ""
	I1221 20:21:48.904036  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:48.904079  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.907634  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:48.907683  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:48.942728  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:48.942748  229762 cri.go:96] found id: ""
	I1221 20:21:48.942755  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:48.942808  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:48.946866  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:48.946889  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:48.962679  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:48.962707  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:49.024967  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:49.024990  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:49.025006  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:49.072249  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:49.072291  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:49.120367  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:49.120400  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:49.164972  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:49.165008  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:49.201081  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:49.201107  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:49.239157  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:49.239192  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:49.279543  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:49.279572  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:49.345265  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:49.345297  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:49.378951  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:49.378983  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:49.437873  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:49.437905  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:49.476896  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:49.476928  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:52.072314  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:52.072718  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:52.072764  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:52.072815  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:52.117111  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:52.117133  229762 cri.go:96] found id: ""
	I1221 20:21:52.117143  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:52.117194  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.121423  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:52.121496  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:52.163120  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:52.163136  229762 cri.go:96] found id: ""
	I1221 20:21:52.163143  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:52.163189  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.167010  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:52.167069  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:52.217969  229762 cri.go:96] found id: ""
	I1221 20:21:52.218003  229762 logs.go:282] 0 containers: []
	W1221 20:21:52.218012  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:52.218020  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:52.218076  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:52.260286  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:52.260308  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:52.260314  229762 cri.go:96] found id: ""
	I1221 20:21:52.260323  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:52.260375  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.264262  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.267824  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:52.267883  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:52.304090  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:52.304107  229762 cri.go:96] found id: ""
	I1221 20:21:52.304114  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:52.304148  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.307944  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:52.308004  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:52.345455  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:52.345475  229762 cri.go:96] found id: ""
	I1221 20:21:52.345484  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:52.345537  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.349727  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:52.349792  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:52.388509  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:52.388529  229762 cri.go:96] found id: ""
	I1221 20:21:52.388538  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:52.388575  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.392182  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:52.392266  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:52.429151  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:52.429173  229762 cri.go:96] found id: ""
	I1221 20:21:52.429183  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:52.429263  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:52.433019  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:52.433036  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:52.482213  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:52.482257  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:52.524687  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:52.524714  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:52.543421  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:52.543460  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:52.586003  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:52.586042  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:52.630589  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:52.630622  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:52.671410  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:52.671436  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:52.713424  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:52.713458  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:52.749137  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:52.749167  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:52.808426  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:52.808462  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:52.912774  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:52.912804  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:52.976316  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:52.976340  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:52.976358  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:53.023899  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:53.023924  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
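	
	The block above is the 229762 start job's failure-path diagnostics: with the apiserver on localhost:8443 refusing connections, it falls back to collecting per-component logs straight from the container runtime via crictl and journald. The same data can be pulled by hand from inside the node; a minimal sketch (the profile name and container ID are placeholders, not taken from this run):
	
	# inside the node, e.g. via `minikube ssh -p <profile>`
	sudo crictl ps -a                            # list containers and their IDs
	sudo crictl logs --tail 400 <container-id>   # per-component logs, as gathered above
	sudo journalctl -u kubelet -n 400            # kubelet logs
	sudo journalctl -u crio -n 400               # CRI-O logs
	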
	I1221 20:21:52.284037  263272 out.go:252] * Updating the running docker "pause-592353" container ...
	I1221 20:21:52.284073  263272 machine.go:94] provisionDockerMachine start ...
	I1221 20:21:52.284168  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.303301  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.303568  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.303583  263272 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:21:52.443186  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-592353
	
	I1221 20:21:52.443269  263272 ubuntu.go:182] provisioning hostname "pause-592353"
	I1221 20:21:52.443345  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.467065  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.467369  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.467389  263272 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-592353 && echo "pause-592353" | sudo tee /etc/hostname
	I1221 20:21:52.623449  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-592353
	
	I1221 20:21:52.623539  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.645017  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:52.645338  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:52.645367  263272 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-592353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-592353/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-592353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:21:52.785678  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: 
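	
	The multi-line SSH command above is the snippet minikube uses to keep the hostname resolvable locally: if no /etc/hosts entry ends in the hostname, it either rewrites an existing 127.0.1.1 line or appends one. A quick manual check inside the node (illustrative only):
	
	hostname
	grep '^127.0.1.1' /etc/hosts
	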
	I1221 20:21:52.785708  263272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:21:52.785737  263272 ubuntu.go:190] setting up certificates
	I1221 20:21:52.785748  263272 provision.go:84] configureAuth start
	I1221 20:21:52.785805  263272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-592353
	I1221 20:21:52.806027  263272 provision.go:143] copyHostCerts
	I1221 20:21:52.806106  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:21:52.806118  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:21:52.806185  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:21:52.806340  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:21:52.806353  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:21:52.806383  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:21:52.806447  263272 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:21:52.806455  263272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:21:52.806478  263272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:21:52.806528  263272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.pause-592353 san=[127.0.0.1 192.168.85.2 localhost minikube pause-592353]
	I1221 20:21:52.839828  263272 provision.go:177] copyRemoteCerts
	I1221 20:21:52.839898  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:21:52.839944  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:52.858118  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:52.958044  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:21:52.979662  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:21:52.999500  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1221 20:21:53.017824  263272 provision.go:87] duration metric: took 232.063395ms to configureAuth
	I1221 20:21:53.017849  263272 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:21:53.018053  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:53.018166  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.037454  263272 main.go:144] libmachine: Using SSH client type: native
	I1221 20:21:53.037762  263272 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1221 20:21:53.037784  263272 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:21:53.353880  263272 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:21:53.353906  263272 machine.go:97] duration metric: took 1.069826019s to provisionDockerMachine
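	
	The SSH command a few lines above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS marking the service CIDR (10.96.0.0/12) as an insecure registry, then restarts CRI-O so the option is picked up on the next start. To confirm on the node (illustrative):
	
	cat /etc/sysconfig/crio.minikube
	systemctl status crio --no-pager
	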
	I1221 20:21:53.353917  263272 start.go:293] postStartSetup for "pause-592353" (driver="docker")
	I1221 20:21:53.353926  263272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:21:53.353983  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:21:53.354028  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.371991  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.469491  263272 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:21:53.472887  263272 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:21:53.472919  263272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:21:53.472933  263272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:21:53.472996  263272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:21:53.473092  263272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:21:53.473218  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:21:53.481185  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:53.499436  263272 start.go:296] duration metric: took 145.503948ms for postStartSetup
	I1221 20:21:53.499507  263272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:21:53.499552  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.517632  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.611145  263272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:21:53.615603  263272 fix.go:56] duration metric: took 1.354559475s for fixHost
	I1221 20:21:53.615631  263272 start.go:83] releasing machines lock for "pause-592353", held for 1.354606217s
	I1221 20:21:53.615701  263272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-592353
	I1221 20:21:53.633939  263272 ssh_runner.go:195] Run: cat /version.json
	I1221 20:21:53.633995  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.634016  263272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:21:53.634087  263272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-592353
	I1221 20:21:53.654786  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.655731  263272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/pause-592353/id_rsa Username:docker}
	I1221 20:21:53.806036  263272 ssh_runner.go:195] Run: systemctl --version
	I1221 20:21:53.812583  263272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:21:53.848736  263272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:21:53.853585  263272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:21:53.853648  263272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:21:53.861257  263272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:21:53.861280  263272 start.go:496] detecting cgroup driver to use...
	I1221 20:21:53.861312  263272 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:21:53.861357  263272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:21:53.875038  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:21:53.886705  263272 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:21:53.886756  263272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:21:53.900558  263272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:21:53.911993  263272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:21:54.020150  263272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:21:54.123634  263272 docker.go:234] disabling docker service ...
	I1221 20:21:54.123705  263272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:21:54.138199  263272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:21:54.150458  263272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:21:54.255333  263272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:21:54.360766  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:21:54.373048  263272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:21:54.387057  263272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:21:54.387127  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.395367  263272 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:21:54.395418  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.403743  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.411953  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.420621  263272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:21:54.428539  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.437007  263272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.444982  263272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:21:54.453698  263272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:21:54.461409  263272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:21:54.468967  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:54.575453  263272 ssh_runner.go:195] Run: sudo systemctl restart crio
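	
	The run of sed commands above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to systemd, sets conmon_cgroup to "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before reloading systemd and restarting crio. Reconstructed from those sed expressions (not a verbatim copy of the file on the node), the relevant keys end up roughly as:
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	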
	I1221 20:21:54.742529  263272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:21:54.742588  263272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:21:54.746480  263272 start.go:564] Will wait 60s for crictl version
	I1221 20:21:54.746537  263272 ssh_runner.go:195] Run: which crictl
	I1221 20:21:54.749998  263272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:21:54.774440  263272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
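	
	The version probe above works because /etc/crictl.yaml was rewritten earlier in this run to point crictl at unix:///var/run/crio/crio.sock. Checking that wiring by hand is a one-liner each (illustrative):
	
	cat /etc/crictl.yaml
	sudo crictl version
	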
	I1221 20:21:54.774528  263272 ssh_runner.go:195] Run: crio --version
	I1221 20:21:54.801705  263272 ssh_runner.go:195] Run: crio --version
	I1221 20:21:54.830771  263272 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:21:52.573556  260566 out.go:252]   - Generating certificates and keys ...
	I1221 20:21:52.573652  260566 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:21:52.573744  260566 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:21:53.016740  260566 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:21:53.328944  260566 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:21:53.503553  260566 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:21:53.615670  260566 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:21:54.077499  260566 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:21:54.077703  260566 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-149976 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1221 20:21:54.142126  260566 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:21:54.142341  260566 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-149976 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1221 20:21:54.435472  260566 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:21:54.572710  260566 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:21:54.725512  260566 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:21:54.725611  260566 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:21:54.801275  260566 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:21:54.832089  263272 cli_runner.go:164] Run: docker network inspect pause-592353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:21:54.852701  263272 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1221 20:21:54.856903  263272 kubeadm.go:884] updating cluster {Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:21:54.857054  263272 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:21:54.857092  263272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:54.890869  263272 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:54.890889  263272 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:21:54.890928  263272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:21:54.916446  263272 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:21:54.916466  263272 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:21:54.916473  263272 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1221 20:21:54.916564  263272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-592353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
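	
	The unit fragment above is what minikube renders into the kubelet systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below): an empty ExecStart= clears the packaged command, then the full kubelet invocation follows with the node IP and hostname override for this profile. A quick way to inspect the effective unit on the node (illustrative):
	
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	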
	I1221 20:21:54.916623  263272 ssh_runner.go:195] Run: crio config
	I1221 20:21:54.962610  263272 cni.go:84] Creating CNI manager for ""
	I1221 20:21:54.962631  263272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:21:54.962645  263272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:21:54.962673  263272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-592353 NodeName:pause-592353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:21:54.962819  263272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-592353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:21:54.962886  263272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:21:54.971076  263272 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:21:54.971145  263272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:21:54.978510  263272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1221 20:21:54.990874  263272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:21:55.004192  263272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
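	
	The kubeadm YAML rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new; on a restart it is later diffed against the copy already on disk, and an empty diff is what allows the "does not require reconfiguration" short-circuit seen further down. The check itself, as run later in this log:
	
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	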
	I1221 20:21:55.015957  263272 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:21:55.019623  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:55.125013  263272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:55.137282  263272 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353 for IP: 192.168.85.2
	I1221 20:21:55.137305  263272 certs.go:195] generating shared ca certs ...
	I1221 20:21:55.137320  263272 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.137480  263272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:21:55.137562  263272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:21:55.137580  263272 certs.go:257] generating profile certs ...
	I1221 20:21:55.137679  263272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key
	I1221 20:21:55.137751  263272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.key.e15d9711
	I1221 20:21:55.137805  263272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.key
	I1221 20:21:55.137947  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:21:55.137997  263272 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:21:55.138012  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:21:55.138047  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:21:55.138083  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:21:55.138117  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:21:55.138170  263272 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:21:55.138741  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:21:55.156542  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:21:55.173124  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:21:55.189521  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:21:55.206933  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1221 20:21:55.224990  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:21:55.242013  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:21:55.258462  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:21:55.275238  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:21:55.291268  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:21:55.307456  263272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:21:55.322860  263272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:21:55.334485  263272 ssh_runner.go:195] Run: openssl version
	I1221 20:21:55.340313  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.346838  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:21:55.353446  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.357006  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.357074  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:21:55.390502  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:21:55.397782  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.404974  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:21:55.412210  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.416265  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.416312  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:21:55.450379  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:21:55.457745  263272 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.465054  263272 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:21:55.472164  263272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.475893  263272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.475946  263272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:21:55.512592  263272 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
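	
	The openssl x509 -hash probes and ln -fs / test -L pairs above maintain OpenSSL's hash-named trust links: each certificate placed under /usr/share/ca-certificates is reachable through a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) so the node's TLS stack can find it. Reproducing one lookup by hand (illustrative):
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0
	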
	I1221 20:21:55.520636  263272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:21:55.525059  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:21:55.559306  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:21:55.593589  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:21:55.631571  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:21:55.671334  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:21:55.709616  263272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
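	
	Each openssl ... -checkend 86400 call above exits non-zero if the certificate will expire within the next 24 hours (86400 seconds), letting minikube flag control-plane certs that need regenerating before reuse. A standalone example against one of the same files (illustrative):
	
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for 24h" || echo "expiring or invalid"
	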
	I1221 20:21:55.746910  263272 kubeadm.go:401] StartCluster: {Name:pause-592353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-592353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:21:55.747028  263272 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:21:55.747083  263272 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:21:55.777766  263272 cri.go:96] found id: "d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7"
	I1221 20:21:55.777793  263272 cri.go:96] found id: "8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30"
	I1221 20:21:55.777799  263272 cri.go:96] found id: "42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8"
	I1221 20:21:55.777804  263272 cri.go:96] found id: "5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d"
	I1221 20:21:55.777809  263272 cri.go:96] found id: "1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c"
	I1221 20:21:55.777814  263272 cri.go:96] found id: "c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c"
	I1221 20:21:55.777819  263272 cri.go:96] found id: "201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133"
	I1221 20:21:55.777823  263272 cri.go:96] found id: ""
	I1221 20:21:55.777877  263272 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:21:55.790212  263272 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:21:55Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:21:55.790307  263272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:21:55.798868  263272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:21:55.798889  263272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:21:55.798933  263272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:21:55.806573  263272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:21:55.807493  263272 kubeconfig.go:125] found "pause-592353" server: "https://192.168.85.2:8443"
	I1221 20:21:55.808773  263272 kapi.go:59] client config for pause-592353: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key", CAFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 20:21:55.809321  263272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1221 20:21:55.809345  263272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1221 20:21:55.809352  263272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1221 20:21:55.809361  263272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1221 20:21:55.809367  263272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1221 20:21:55.809774  263272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:21:55.817042  263272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1221 20:21:55.817074  263272 kubeadm.go:602] duration metric: took 18.17799ms to restartPrimaryControlPlane
	I1221 20:21:55.817084  263272 kubeadm.go:403] duration metric: took 70.183735ms to StartCluster
	I1221 20:21:55.817108  263272 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.817174  263272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:21:55.818499  263272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:21:55.818776  263272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:21:55.818837  263272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:21:55.819009  263272 config.go:182] Loaded profile config "pause-592353": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:21:55.824535  263272 out.go:179] * Verifying Kubernetes components...
	I1221 20:21:55.824538  263272 out.go:179] * Enabled addons: 
	I1221 20:21:55.797555  260566 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:21:56.271892  260566 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:21:56.315623  260566 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:21:56.510955  260566 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:21:56.511583  260566 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:21:56.515075  260566 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:21:55.826572  263272 addons.go:530] duration metric: took 7.742462ms for enable addons: enabled=[]
	I1221 20:21:55.826580  263272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:21:55.950216  263272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:21:55.965888  263272 node_ready.go:35] waiting up to 6m0s for node "pause-592353" to be "Ready" ...
	I1221 20:21:55.974838  263272 node_ready.go:49] node "pause-592353" is "Ready"
	I1221 20:21:55.974868  263272 node_ready.go:38] duration metric: took 8.947576ms for node "pause-592353" to be "Ready" ...
	I1221 20:21:55.974885  263272 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:21:55.974934  263272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:21:55.987247  263272 api_server.go:72] duration metric: took 168.434824ms to wait for apiserver process to appear ...
	I1221 20:21:55.987272  263272 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:21:55.987292  263272 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1221 20:21:55.993009  263272 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1221 20:21:55.993955  263272 api_server.go:141] control plane version: v1.34.3
	I1221 20:21:55.993980  263272 api_server.go:131] duration metric: took 6.700305ms to wait for apiserver health ...
	I1221 20:21:55.993989  263272 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:21:55.996763  263272 system_pods.go:59] 7 kube-system pods found
	I1221 20:21:55.996791  263272 system_pods.go:61] "coredns-66bc5c9577-vfzl5" [3062e359-50b1-472d-bc8c-41564481dd9c] Running
	I1221 20:21:55.996799  263272 system_pods.go:61] "etcd-pause-592353" [4a6b784d-8cae-48ee-ad04-22b61eca649a] Running
	I1221 20:21:55.996805  263272 system_pods.go:61] "kindnet-fz2nh" [9a6c131b-77a7-4697-aa59-1106a4d885ac] Running
	I1221 20:21:55.996813  263272 system_pods.go:61] "kube-apiserver-pause-592353" [9d194f3b-8417-42a4-9ecd-5071a3d4d590] Running
	I1221 20:21:55.996828  263272 system_pods.go:61] "kube-controller-manager-pause-592353" [e364cd8e-1b00-4aa6-95d9-062063622a77] Running
	I1221 20:21:55.996835  263272 system_pods.go:61] "kube-proxy-j8r2s" [73638941-53c1-4078-aea3-e51da00fb427] Running
	I1221 20:21:55.996843  263272 system_pods.go:61] "kube-scheduler-pause-592353" [2d2bac2c-eeca-4e21-9e5b-46e1b126d279] Running
	I1221 20:21:55.996851  263272 system_pods.go:74] duration metric: took 2.854911ms to wait for pod list to return data ...
	I1221 20:21:55.996864  263272 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:21:55.999021  263272 default_sa.go:45] found service account: "default"
	I1221 20:21:55.999047  263272 default_sa.go:55] duration metric: took 2.173828ms for default service account to be created ...
	I1221 20:21:55.999059  263272 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:21:56.001898  263272 system_pods.go:86] 7 kube-system pods found
	I1221 20:21:56.001922  263272 system_pods.go:89] "coredns-66bc5c9577-vfzl5" [3062e359-50b1-472d-bc8c-41564481dd9c] Running
	I1221 20:21:56.001931  263272 system_pods.go:89] "etcd-pause-592353" [4a6b784d-8cae-48ee-ad04-22b61eca649a] Running
	I1221 20:21:56.001937  263272 system_pods.go:89] "kindnet-fz2nh" [9a6c131b-77a7-4697-aa59-1106a4d885ac] Running
	I1221 20:21:56.001954  263272 system_pods.go:89] "kube-apiserver-pause-592353" [9d194f3b-8417-42a4-9ecd-5071a3d4d590] Running
	I1221 20:21:56.001964  263272 system_pods.go:89] "kube-controller-manager-pause-592353" [e364cd8e-1b00-4aa6-95d9-062063622a77] Running
	I1221 20:21:56.001971  263272 system_pods.go:89] "kube-proxy-j8r2s" [73638941-53c1-4078-aea3-e51da00fb427] Running
	I1221 20:21:56.001977  263272 system_pods.go:89] "kube-scheduler-pause-592353" [2d2bac2c-eeca-4e21-9e5b-46e1b126d279] Running
	I1221 20:21:56.001986  263272 system_pods.go:126] duration metric: took 2.919437ms to wait for k8s-apps to be running ...
	I1221 20:21:56.001994  263272 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:21:56.002043  263272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:21:56.015337  263272 system_svc.go:56] duration metric: took 13.335334ms WaitForService to wait for kubelet
	I1221 20:21:56.015365  263272 kubeadm.go:587] duration metric: took 196.556893ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:21:56.015385  263272 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:21:56.017647  263272 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:21:56.017668  263272 node_conditions.go:123] node cpu capacity is 8
	I1221 20:21:56.017680  263272 node_conditions.go:105] duration metric: took 2.289962ms to run NodePressure ...
	I1221 20:21:56.017691  263272 start.go:242] waiting for startup goroutines ...
	I1221 20:21:56.017698  263272 start.go:247] waiting for cluster config update ...
	I1221 20:21:56.017704  263272 start.go:256] writing updated cluster config ...
	I1221 20:21:56.017970  263272 ssh_runner.go:195] Run: rm -f paused
	I1221 20:21:56.021879  263272 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:21:56.022716  263272 kapi.go:59] client config for pause-592353: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.crt", KeyFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/profiles/pause-592353/client.key", CAFile:"/home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2867280), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 20:21:56.025336  263272 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfzl5" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.029679  263272 pod_ready.go:94] pod "coredns-66bc5c9577-vfzl5" is "Ready"
	I1221 20:21:56.029701  263272 pod_ready.go:86] duration metric: took 4.343266ms for pod "coredns-66bc5c9577-vfzl5" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.031630  263272 pod_ready.go:83] waiting for pod "etcd-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.035192  263272 pod_ready.go:94] pod "etcd-pause-592353" is "Ready"
	I1221 20:21:56.035211  263272 pod_ready.go:86] duration metric: took 3.561411ms for pod "etcd-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.037299  263272 pod_ready.go:83] waiting for pod "kube-apiserver-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.040876  263272 pod_ready.go:94] pod "kube-apiserver-pause-592353" is "Ready"
	I1221 20:21:56.040897  263272 pod_ready.go:86] duration metric: took 3.577184ms for pod "kube-apiserver-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.042677  263272 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.427038  263272 pod_ready.go:94] pod "kube-controller-manager-pause-592353" is "Ready"
	I1221 20:21:56.427067  263272 pod_ready.go:86] duration metric: took 384.371534ms for pod "kube-controller-manager-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:56.625790  263272 pod_ready.go:83] waiting for pod "kube-proxy-j8r2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.026359  263272 pod_ready.go:94] pod "kube-proxy-j8r2s" is "Ready"
	I1221 20:21:57.026386  263272 pod_ready.go:86] duration metric: took 400.573697ms for pod "kube-proxy-j8r2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.226277  263272 pod_ready.go:83] waiting for pod "kube-scheduler-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.626613  263272 pod_ready.go:94] pod "kube-scheduler-pause-592353" is "Ready"
	I1221 20:21:57.626646  263272 pod_ready.go:86] duration metric: took 400.340853ms for pod "kube-scheduler-pause-592353" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:21:57.626663  263272 pod_ready.go:40] duration metric: took 1.604756714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:21:57.679471  263272 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:21:57.684341  263272 out.go:179] * Done! kubectl is now configured to use "pause-592353" cluster and "default" namespace by default
	I1221 20:21:55.599297  229762 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:21:55.599714  229762 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1221 20:21:55.599759  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 20:21:55.599806  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1221 20:21:55.634826  229762 cri.go:96] found id: "834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:55.634848  229762 cri.go:96] found id: ""
	I1221 20:21:55.634858  229762 logs.go:282] 1 containers: [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7]
	I1221 20:21:55.634908  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.638665  229762 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 20:21:55.638724  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1221 20:21:55.673795  229762 cri.go:96] found id: "1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:55.673813  229762 cri.go:96] found id: ""
	I1221 20:21:55.673821  229762 logs.go:282] 1 containers: [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234]
	I1221 20:21:55.673870  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.677600  229762 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 20:21:55.677653  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1221 20:21:55.712159  229762 cri.go:96] found id: ""
	I1221 20:21:55.712179  229762 logs.go:282] 0 containers: []
	W1221 20:21:55.712187  229762 logs.go:284] No container was found matching "coredns"
	I1221 20:21:55.712193  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 20:21:55.712254  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1221 20:21:55.747249  229762 cri.go:96] found id: "0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:55.747268  229762 cri.go:96] found id: "03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:55.747274  229762 cri.go:96] found id: ""
	I1221 20:21:55.747284  229762 logs.go:282] 2 containers: [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226]
	I1221 20:21:55.747330  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.751172  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.754440  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 20:21:55.754502  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1221 20:21:55.792146  229762 cri.go:96] found id: "67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:55.792165  229762 cri.go:96] found id: ""
	I1221 20:21:55.792174  229762 logs.go:282] 1 containers: [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8]
	I1221 20:21:55.792248  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.795836  229762 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 20:21:55.795898  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1221 20:21:55.832213  229762 cri.go:96] found id: "6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:55.832260  229762 cri.go:96] found id: ""
	I1221 20:21:55.832269  229762 logs.go:282] 1 containers: [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae]
	I1221 20:21:55.832317  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.836127  229762 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 20:21:55.836185  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1221 20:21:55.878346  229762 cri.go:96] found id: "c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:55.878369  229762 cri.go:96] found id: ""
	I1221 20:21:55.878378  229762 logs.go:282] 1 containers: [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b]
	I1221 20:21:55.878432  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.881986  229762 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1221 20:21:55.882056  229762 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1221 20:21:55.916788  229762 cri.go:96] found id: "a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:55.916809  229762 cri.go:96] found id: ""
	I1221 20:21:55.916815  229762 logs.go:282] 1 containers: [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77]
	I1221 20:21:55.916865  229762 ssh_runner.go:195] Run: which crictl
	I1221 20:21:55.920383  229762 logs.go:123] Gathering logs for kube-controller-manager [6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae] ...
	I1221 20:21:55.920406  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e001913ae544d04d594d0a79bb9f0f46c930e0f1902cc66deb4e2b6ab44f7ae"
	I1221 20:21:55.962110  229762 logs.go:123] Gathering logs for storage-provisioner [a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77] ...
	I1221 20:21:55.962137  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b66d116b9cb34b809e1a3c7dc932f68ccd34cecff81f2f07c083cdd2747a77"
	I1221 20:21:56.005217  229762 logs.go:123] Gathering logs for dmesg ...
	I1221 20:21:56.005266  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 20:21:56.022025  229762 logs.go:123] Gathering logs for kube-proxy [67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8] ...
	I1221 20:21:56.022049  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67264d2ac0f217b4ff38dceecec35f977cd7caeed9ab2e9d07615c5b609dd3c8"
	I1221 20:21:56.069736  229762 logs.go:123] Gathering logs for kindnet [c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b] ...
	I1221 20:21:56.069767  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c214f922314ea7fff8710ac15d359e37d8ed0a7389e1b6c75e90facc26d1881b"
	I1221 20:21:56.109154  229762 logs.go:123] Gathering logs for CRI-O ...
	I1221 20:21:56.109190  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 20:21:56.164484  229762 logs.go:123] Gathering logs for container status ...
	I1221 20:21:56.164516  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 20:21:56.206018  229762 logs.go:123] Gathering logs for kubelet ...
	I1221 20:21:56.206043  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 20:21:56.307204  229762 logs.go:123] Gathering logs for describe nodes ...
	I1221 20:21:56.307249  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1221 20:21:56.367213  229762 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1221 20:21:56.367249  229762 logs.go:123] Gathering logs for kube-apiserver [834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7] ...
	I1221 20:21:56.367265  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 834de8ba321d6bd5b28785b5d9f7de7dca82d1e63e448f36b3507fc90304c4a7"
	I1221 20:21:56.403958  229762 logs.go:123] Gathering logs for etcd [1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234] ...
	I1221 20:21:56.403986  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1878ae96856edc4d46326f10a1d534e8b346096f2b45458b0df91501fae6c234"
	I1221 20:21:56.445457  229762 logs.go:123] Gathering logs for kube-scheduler [0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3] ...
	I1221 20:21:56.445486  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc7d7fc5f58fdc657018a4947fc76699ef46b7a4559658f741d847b8978fbc3"
	I1221 20:21:56.513482  229762 logs.go:123] Gathering logs for kube-scheduler [03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226] ...
	I1221 20:21:56.513515  229762 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ee8f8ff9015e5214092f87458a1417e2a07e0040434fc9c840bae2be1cf226"
	I1221 20:21:56.516447  260566 out.go:252]   - Booting up control plane ...
	I1221 20:21:56.516583  260566 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 20:21:56.516678  260566 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 20:21:56.517208  260566 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 20:21:56.530537  260566 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 20:21:56.530640  260566 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 20:21:56.537379  260566 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 20:21:56.537630  260566 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 20:21:56.537720  260566 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 20:21:56.642333  260566 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 20:21:56.642451  260566 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 20:21:57.143804  260566 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.621216ms
	I1221 20:21:57.146953  260566 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 20:21:57.147070  260566 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1221 20:21:57.147202  260566 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 20:21:57.147316  260566 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 20:21:58.682311  260566 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.535177376s
	I1221 20:21:58.820799  260566 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.673787214s
	I1221 20:22:00.648372  260566 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501332865s
	I1221 20:22:00.665973  260566 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 20:22:00.675731  260566 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 20:22:00.684116  260566 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 20:22:00.684418  260566 kubeadm.go:319] [mark-control-plane] Marking the node auto-149976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 20:22:00.692417  260566 kubeadm.go:319] [bootstrap-token] Using token: 6lrr53.nk55ynt6lxsagguw
	
	
	==> CRI-O <==
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.659409342Z" level=info msg="RDT not available in the host system"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.659425474Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660217849Z" level=info msg="Conmon does support the --sync option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660246661Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.660263997Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.661162831Z" level=info msg="Conmon does support the --sync option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.661182857Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.664815633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.664838374Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.665576463Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.665977862Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.666018075Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.736870319Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-vfzl5 Namespace:kube-system ID:64e76d1304030a76c17da55d05a94ce0677375453ff6fa163c1d84abf9210421 UID:3062e359-50b1-472d-bc8c-41564481dd9c NetNS:/var/run/netns/0c21b218-ac64-4707-9dca-2136e791543f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006263d0}] Aliases:map[]}"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.737074537Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-vfzl5 for CNI network kindnet (type=ptp)"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738150686Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738244682Z" level=info msg="Starting seccomp notifier watcher"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738377529Z" level=info msg="Create NRI interface"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738852629Z" level=info msg="built-in NRI default validator is disabled"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738878035Z" level=info msg="runtime interface created"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738891907Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738900177Z" level=info msg="runtime interface starting up..."
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738908144Z" level=info msg="starting plugins..."
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.738921323Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 21 20:21:54 pause-592353 crio[2235]: time="2025-12-21T20:21:54.739278278Z" level=info msg="No systemd watchdog enabled"
	Dec 21 20:21:54 pause-592353 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d7ddab942fcf3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     13 seconds ago      Running             coredns                   0                   64e76d1304030       coredns-66bc5c9577-vfzl5               kube-system
	8d5f874a66210       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   b1b209ece555e       kindnet-fz2nh                          kube-system
	42a6f973de3c4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     26 seconds ago      Running             kube-proxy                0                   c0f0bd72ff2ea       kube-proxy-j8r2s                       kube-system
	5231ce47f2d8f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     35 seconds ago      Running             kube-scheduler            0                   157a40dbeee5a       kube-scheduler-pause-592353            kube-system
	1a16fa514a1ef       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     35 seconds ago      Running             etcd                      0                   3c471a4586535       etcd-pause-592353                      kube-system
	c3d9d9135faab       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     35 seconds ago      Running             kube-apiserver            0                   3a85cc0609c9c       kube-apiserver-pause-592353            kube-system
	201d5aae363ca       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     35 seconds ago      Running             kube-controller-manager   0                   c7cee7a4e2e49       kube-controller-manager-pause-592353   kube-system
	
	
	==> coredns [d7ddab942fcf30350719c79fe4e4da1c0344baa599e6f163ace8f40cf51716a7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50619 - 8314 "HINFO IN 4954424143234619073.6872614671334231471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.459529952s
	
	
	==> describe nodes <==
	Name:               pause-592353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-592353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=pause-592353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_21_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:21:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-592353
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:21:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:21:48 +0000   Sun, 21 Dec 2025 20:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-592353
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                4107c20a-45c6-43e4-840d-321036df5d2f
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-vfzl5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-592353                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-fz2nh                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-592353             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-592353    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-j8r2s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-592353             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-592353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-592353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-592353 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-592353 event: Registered Node pause-592353 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-592353 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.085350] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025061] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.894686] kauditd_printk_skb: 47 callbacks suppressed
	[Dec21 19:48] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.000151] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023871] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023881] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023899] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +1.023888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +2.047760] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +4.031573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[  +8.255179] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 19:49] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[ +32.252695] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	
	
	==> etcd [1a16fa514a1ef021231144a2510542320893d892df6c756403ccd3f12a41fb0c] <==
	{"level":"warn","ts":"2025-12-21T20:21:27.018918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.026718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.035260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.042692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.048813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.055254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.065940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.071873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.077877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.085249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.098319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.105618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.112783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.120944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.127184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.133652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.139853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.146793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.153162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.159347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.166610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.178634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.192331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:21:27.241649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-21T20:21:44.799209Z","caller":"traceutil/trace.go:172","msg":"trace[1504529588] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"189.702781ms","start":"2025-12-21T20:21:44.609491Z","end":"2025-12-21T20:21:44.799194Z","steps":["trace[1504529588] 'process raft request'  (duration: 189.573752ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:22:02 up  1:04,  0 user,  load average: 2.49, 2.92, 2.10
	Linux pause-592353 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d5f874a6621042cba99fbce56842b3962ec673f9bbedcd6afd28d968aedbc30] <==
	I1221 20:21:37.933279       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:21:37.933788       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1221 20:21:37.933943       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:21:37.933976       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:21:37.934005       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:21:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:21:38.135247       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:21:38.135278       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:21:38.135291       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:21:38.136147       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:21:38.510679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:21:38.510713       1 metrics.go:72] Registering metrics
	I1221 20:21:38.510815       1 controller.go:711] "Syncing nftables rules"
	I1221 20:21:48.136219       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:21:48.136331       1 main.go:301] handling current node
	I1221 20:21:58.141324       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:21:58.141366       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c3d9d9135faab4bd815eb6556f77257cab04249a3949c66ff3a7c8a7158a602c] <==
	I1221 20:21:27.787645       1 policy_source.go:240] refreshing policies
	E1221 20:21:27.807652       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1221 20:21:27.855050       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:21:27.857592       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:27.857718       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1221 20:21:27.864577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:27.864794       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:21:27.953551       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:21:28.657604       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 20:21:28.661015       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:21:28.661035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:21:29.097384       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:21:29.130913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:21:29.263618       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:21:29.269514       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1221 20:21:29.270564       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:21:29.275119       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:21:29.675553       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:21:30.085034       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:21:30.093378       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:21:30.101954       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:21:35.378387       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:21:35.678948       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:35.682421       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:21:35.727867       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [201d5aae363cad0f1dc034c2f10bf6a04bf4e952b700716cb2f85ef85d99e133] <==
	I1221 20:21:34.675217       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1221 20:21:34.676299       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1221 20:21:34.676409       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 20:21:34.676509       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1221 20:21:34.676564       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 20:21:34.676578       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:21:34.677092       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 20:21:34.677195       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1221 20:21:34.678063       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 20:21:34.678094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 20:21:34.678117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1221 20:21:34.678726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 20:21:34.679408       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 20:21:34.681009       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:21:34.681059       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:21:34.681100       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:21:34.681113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:21:34.681130       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:21:34.683330       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1221 20:21:34.687136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:21:34.687276       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-592353" podCIDRs=["10.244.0.0/24"]
	I1221 20:21:34.690353       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 20:21:34.697694       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:21:34.704023       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:21:49.678170       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [42a6f973de3c4cd2665eefb628f1948c23aca56e3f9d1687e6a7f96eb4cbd6b8] <==
	I1221 20:21:36.151026       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:21:36.237684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:21:36.338366       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:21:36.338410       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1221 20:21:36.338548       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:21:36.360556       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:21:36.360603       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:21:36.366793       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:21:36.367280       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:21:36.367305       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:21:36.368869       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:21:36.368892       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:21:36.368942       1 config.go:200] "Starting service config controller"
	I1221 20:21:36.368958       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:21:36.368943       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:21:36.368976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:21:36.369034       1 config.go:309] "Starting node config controller"
	I1221 20:21:36.369050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:21:36.369059       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:21:36.469386       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:21:36.469417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:21:36.469431       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5231ce47f2d8f12d2622ea04f309e487bd672aaae1b69080127c64beafdec65d] <==
	I1221 20:21:28.128643       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:21:28.130468       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:21:28.130509       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:21:28.130845       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:21:28.130916       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1221 20:21:28.131797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1221 20:21:28.132275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 20:21:28.132545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 20:21:28.132673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 20:21:28.134107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:21:28.134214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:21:28.134276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1221 20:21:28.134410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:21:28.134463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:21:28.134512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 20:21:28.134678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 20:21:28.134697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 20:21:28.134730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 20:21:28.134759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 20:21:28.134834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:21:28.134908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 20:21:28.134909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 20:21:28.134931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 20:21:28.135063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1221 20:21:29.230601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:21:30 pause-592353 kubelet[1330]: I1221 20:21:30.988855    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-592353" podStartSLOduration=2.988173539 podStartE2EDuration="2.988173539s" podCreationTimestamp="2025-12-21 20:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:30.974485624 +0000 UTC m=+1.129328529" watchObservedRunningTime="2025-12-21 20:21:30.988173539 +0000 UTC m=+1.143016438"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.000347    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-592353" podStartSLOduration=3.000325487 podStartE2EDuration="3.000325487s" podCreationTimestamp="2025-12-21 20:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:30.989137479 +0000 UTC m=+1.143980380" watchObservedRunningTime="2025-12-21 20:21:31.000325487 +0000 UTC m=+1.155168382"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.009805    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-592353" podStartSLOduration=1.009784429 podStartE2EDuration="1.009784429s" podCreationTimestamp="2025-12-21 20:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:31.000764121 +0000 UTC m=+1.155607020" watchObservedRunningTime="2025-12-21 20:21:31.009784429 +0000 UTC m=+1.164627331"
	Dec 21 20:21:31 pause-592353 kubelet[1330]: I1221 20:21:31.009937    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-592353" podStartSLOduration=1.009929671 podStartE2EDuration="1.009929671s" podCreationTimestamp="2025-12-21 20:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:31.009572568 +0000 UTC m=+1.164415471" watchObservedRunningTime="2025-12-21 20:21:31.009929671 +0000 UTC m=+1.164772574"
	Dec 21 20:21:34 pause-592353 kubelet[1330]: I1221 20:21:34.765636    1330 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 20:21:34 pause-592353 kubelet[1330]: I1221 20:21:34.766332    1330 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760535    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73638941-53c1-4078-aea3-e51da00fb427-xtables-lock\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760569    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73638941-53c1-4078-aea3-e51da00fb427-lib-modules\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760586    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73638941-53c1-4078-aea3-e51da00fb427-kube-proxy\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.760608    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2j9\" (UniqueName: \"kubernetes.io/projected/73638941-53c1-4078-aea3-e51da00fb427-kube-api-access-dr2j9\") pod \"kube-proxy-j8r2s\" (UID: \"73638941-53c1-4078-aea3-e51da00fb427\") " pod="kube-system/kube-proxy-j8r2s"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861652    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbt5\" (UniqueName: \"kubernetes.io/projected/9a6c131b-77a7-4697-aa59-1106a4d885ac-kube-api-access-wrbt5\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861697    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-xtables-lock\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861768    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-lib-modules\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:35 pause-592353 kubelet[1330]: I1221 20:21:35.861915    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9a6c131b-77a7-4697-aa59-1106a4d885ac-cni-cfg\") pod \"kindnet-fz2nh\" (UID: \"9a6c131b-77a7-4697-aa59-1106a4d885ac\") " pod="kube-system/kindnet-fz2nh"
	Dec 21 20:21:36 pause-592353 kubelet[1330]: I1221 20:21:36.975935    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8r2s" podStartSLOduration=1.9759142729999999 podStartE2EDuration="1.975914273s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:36.975792243 +0000 UTC m=+7.130635146" watchObservedRunningTime="2025-12-21 20:21:36.975914273 +0000 UTC m=+7.130757177"
	Dec 21 20:21:38 pause-592353 kubelet[1330]: I1221 20:21:38.602056    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fz2nh" podStartSLOduration=1.900073106 podStartE2EDuration="3.602034866s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="2025-12-21 20:21:36.063213689 +0000 UTC m=+6.218056582" lastFinishedPulling="2025-12-21 20:21:37.765175449 +0000 UTC m=+7.920018342" observedRunningTime="2025-12-21 20:21:37.980201923 +0000 UTC m=+8.135044821" watchObservedRunningTime="2025-12-21 20:21:38.602034866 +0000 UTC m=+8.756877767"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.675443    1330 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.754653    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3062e359-50b1-472d-bc8c-41564481dd9c-config-volume\") pod \"coredns-66bc5c9577-vfzl5\" (UID: \"3062e359-50b1-472d-bc8c-41564481dd9c\") " pod="kube-system/coredns-66bc5c9577-vfzl5"
	Dec 21 20:21:48 pause-592353 kubelet[1330]: I1221 20:21:48.754707    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgh4\" (UniqueName: \"kubernetes.io/projected/3062e359-50b1-472d-bc8c-41564481dd9c-kube-api-access-wjgh4\") pod \"coredns-66bc5c9577-vfzl5\" (UID: \"3062e359-50b1-472d-bc8c-41564481dd9c\") " pod="kube-system/coredns-66bc5c9577-vfzl5"
	Dec 21 20:21:50 pause-592353 kubelet[1330]: I1221 20:21:50.004168    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vfzl5" podStartSLOduration=15.004147268 podStartE2EDuration="15.004147268s" podCreationTimestamp="2025-12-21 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:21:50.003862505 +0000 UTC m=+20.158705419" watchObservedRunningTime="2025-12-21 20:21:50.004147268 +0000 UTC m=+20.158990168"
	Dec 21 20:21:54 pause-592353 kubelet[1330]: E1221 20:21:54.946884    1330 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Dec 21 20:21:58 pause-592353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:21:58 pause-592353 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:21:58 pause-592353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:21:58 pause-592353 systemd[1]: kubelet.service: Consumed 1.213s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-592353 -n pause-592353
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-592353 -n pause-592353: exit status 2 (343.031755ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-592353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.24s)
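A minimal manual re-check of the pause state, assuming the profile name pause-592353 from the log above and using only commands that already appear in this report (an illustrative sketch, not the harness's own code):

	# re-run the API-server status probe used by the post-mortem above
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-592353 -n pause-592353
	# list all CRI containers on the node to see their actual states
	out/minikube-linux-amd64 -p pause-592353 ssh -- sudo crictl ps -a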

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-699289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-699289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.068705ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:25:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-699289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-699289 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-699289 describe deploy/metrics-server -n kube-system: exit status 1 (62.723758ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-699289 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
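The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check that the cluster is not paused, which runs the runc listing shown in the stderr block and fails with "open /run/runc: no such file or directory". A minimal manual version of that probe, assuming the profile name old-k8s-version-699289 from the log (an illustrative sketch, not the addon code itself):

	# reproduce the paused-state probe that failed inside minikube
	out/minikube-linux-amd64 -p old-k8s-version-699289 ssh -- sudo runc list -f json
	# the CRI view of the same containers, which does not depend on the runc state directory
	out/minikube-linux-amd64 -p old-k8s-version-699289 ssh -- sudo crictl ps -a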
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-699289
helpers_test.go:244: (dbg) docker inspect old-k8s-version-699289:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	        "Created": "2025-12-21T20:24:47.982475594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:24:48.01963505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hosts",
	        "LogPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3-json.log",
	        "Name": "/old-k8s-version-699289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-699289:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-699289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	                "LowerDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-699289",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-699289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-699289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8fcad93619a4f1ceac474192dd52a9cc99946b922fbc3b577098a2753f08d78b",
	            "SandboxKey": "/var/run/docker/netns/8fcad93619a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-699289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99f5907a172c3f93121569e27574257a2eb119dd81f153d568f418838cd89542",
	                    "EndpointID": "9f5b93176b48bae2b855e4f463cba60871fe33c9f06fcc69ac9bb096ca475adb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0a:a3:81:8b:be:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-699289",
	                        "e26e2b356a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25: (1.109046909s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-149976 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p flannel-149976 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo containerd config dump                                                                                                          │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p flannel-149976 sudo crio config                                                                                                                     │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p flannel-149976                                                                                                                                      │ flannel-149976         │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ embed-certs-413073     │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 pgrep -a kubelet                                                                                                                      │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/nsswitch.conf                                                                                                           │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/hosts                                                                                                                   │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/resolv.conf                                                                                                             │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crictl pods                                                                                                                      │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crictl ps --all                                                                                                                  │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo ip a s                                                                                                                           │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo ip r s                                                                                                                           │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo iptables-save                                                                                                                    │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo iptables -t nat -L -n -v                                                                                                         │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-699289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-699289 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl status kubelet --all --full --no-pager                                                                                 │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat kubelet --no-pager                                                                                                 │ bridge-149976          │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:25:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:25:16.714121  328795 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:25:16.714358  328795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:16.714366  328795 out.go:374] Setting ErrFile to fd 2...
	I1221 20:25:16.714369  328795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:16.714559  328795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:25:16.715013  328795 out.go:368] Setting JSON to false
	I1221 20:25:16.716141  328795 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4066,"bootTime":1766344651,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:25:16.716200  328795 start.go:143] virtualization: kvm guest
	I1221 20:25:16.717939  328795 out.go:179] * [embed-certs-413073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:25:16.719110  328795 notify.go:221] Checking for updates...
	I1221 20:25:16.719132  328795 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:25:16.720248  328795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:25:16.721722  328795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:25:16.722877  328795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:25:16.724044  328795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:25:16.725216  328795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:25:16.726978  328795 config.go:182] Loaded profile config "bridge-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:25:16.727088  328795 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:25:16.727169  328795 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:25:16.727286  328795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:25:16.752696  328795 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:25:16.752863  328795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:16.811399  328795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-21 20:25:16.800909167 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:16.811534  328795 docker.go:319] overlay module found
	I1221 20:25:16.812936  328795 out.go:179] * Using the docker driver based on user configuration
	I1221 20:25:16.814062  328795 start.go:309] selected driver: docker
	I1221 20:25:16.814077  328795 start.go:928] validating driver "docker" against <nil>
	I1221 20:25:16.814089  328795 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:25:16.814850  328795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:16.878521  328795 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-21 20:25:16.868315711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:16.878721  328795 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:25:16.878962  328795 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:25:16.880448  328795 out.go:179] * Using Docker driver with root privileges
	I1221 20:25:16.881518  328795 cni.go:84] Creating CNI manager for ""
	I1221 20:25:16.881574  328795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:16.881586  328795 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:25:16.881637  328795 start.go:353] cluster config:
	{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:25:16.882978  328795 out.go:179] * Starting "embed-certs-413073" primary control-plane node in "embed-certs-413073" cluster
	I1221 20:25:16.884087  328795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:25:16.885189  328795 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:25:16.886267  328795 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:25:16.886296  328795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:25:16.886309  328795 cache.go:65] Caching tarball of preloaded images
	I1221 20:25:16.886358  328795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:25:16.886394  328795 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:25:16.886406  328795 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:25:16.886505  328795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:25:16.886527  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json: {Name:mk1c076a37c21932ca581683fe2285eb44e8c30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:16.907889  328795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:25:16.907916  328795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:25:16.907935  328795 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:25:16.907970  328795 start.go:360] acquireMachinesLock for embed-certs-413073: {Name:mkd7ba395e71c68e48a93bb569cce5d8b29847bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:25:16.908066  328795 start.go:364] duration metric: took 74.674µs to acquireMachinesLock for "embed-certs-413073"
	I1221 20:25:16.908091  328795 start.go:93] Provisioning new machine with config: &{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:25:16.908192  328795 start.go:125] createHost starting for "" (driver="docker")
	I1221 20:25:16.394187  313519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:16.894140  313519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:17.393362  313519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:17.477648  313519 kubeadm.go:1114] duration metric: took 12.172648912s to wait for elevateKubeSystemPrivileges
	I1221 20:25:17.477687  313519 kubeadm.go:403] duration metric: took 22.301211996s to StartCluster
	I1221 20:25:17.477709  313519 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:17.477781  313519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:25:17.478770  313519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:17.478998  313519 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:25:17.479081  313519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 20:25:17.479081  313519 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:25:17.479173  313519 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-699289"
	I1221 20:25:17.479173  313519 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-699289"
	I1221 20:25:17.479192  313519 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-699289"
	I1221 20:25:17.479206  313519 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-699289"
	I1221 20:25:17.479242  313519 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:25:17.479265  313519 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:25:17.479599  313519 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:25:17.479794  313519 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:25:17.481363  313519 out.go:179] * Verifying Kubernetes components...
	I1221 20:25:17.483438  313519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:25:17.506344  313519 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-699289"
	I1221 20:25:17.506389  313519 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:25:17.506640  313519 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:25:15.994609  321051 out.go:252]   - Generating certificates and keys ...
	I1221 20:25:15.994721  321051 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:25:15.994846  321051 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:25:16.042702  321051 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:25:16.186878  321051 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:25:16.335460  321051 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:25:16.389365  321051 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:25:16.469260  321051 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:25:16.469522  321051 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-328404] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1221 20:25:16.641504  321051 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:25:16.641695  321051 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-328404] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1221 20:25:16.698437  321051 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:25:16.878773  321051 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:25:16.982810  321051 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:25:16.983104  321051 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:25:17.038125  321051 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:25:17.082266  321051 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:25:17.156321  321051 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:25:17.344975  321051 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:25:17.522817  321051 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:25:17.527618  321051 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:25:17.536729  321051 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:25:17.506969  313519 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:25:17.507843  313519 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:25:17.507866  313519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:25:17.507920  313519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:25:17.536588  313519 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:25:17.536771  313519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:25:17.538447  313519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:25:17.539013  313519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:25:17.567510  313519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:25:17.609559  313519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 20:25:17.660029  313519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:25:17.675147  313519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:25:17.713667  313519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:25:17.940828  313519 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1221 20:25:17.941955  313519 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-699289" to be "Ready" ...
	I1221 20:25:18.167281  313519 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1221 20:25:17.539004  321051 out.go:252]   - Booting up control plane ...
	I1221 20:25:17.539135  321051 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 20:25:17.539333  321051 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 20:25:17.541389  321051 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 20:25:17.562991  321051 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 20:25:17.563808  321051 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 20:25:17.578926  321051 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 20:25:17.579058  321051 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 20:25:17.579124  321051 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 20:25:17.741464  321051 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 20:25:17.741700  321051 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 20:25:18.243515  321051 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.019206ms
	I1221 20:25:18.246383  321051 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 20:25:18.246547  321051 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1221 20:25:18.246680  321051 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 20:25:18.246810  321051 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1221 20:25:15.652311  305216 pod_ready.go:104] pod "coredns-66bc5c9577-r87rl" is not "Ready", error: <nil>
	W1221 20:25:17.653168  305216 pod_ready.go:104] pod "coredns-66bc5c9577-r87rl" is not "Ready", error: <nil>
	W1221 20:25:20.152386  305216 pod_ready.go:104] pod "coredns-66bc5c9577-r87rl" is not "Ready", error: <nil>
	I1221 20:25:18.168410  313519 addons.go:530] duration metric: took 689.326167ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1221 20:25:18.446543  313519 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-699289" context rescaled to 1 replicas
	W1221 20:25:19.945647  313519 node_ready.go:57] node "old-k8s-version-699289" has "Ready":"False" status (will retry)
	I1221 20:25:16.909895  328795 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1221 20:25:16.910155  328795 start.go:159] libmachine.API.Create for "embed-certs-413073" (driver="docker")
	I1221 20:25:16.910189  328795 client.go:173] LocalClient.Create starting
	I1221 20:25:16.910301  328795 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 20:25:16.910337  328795 main.go:144] libmachine: Decoding PEM data...
	I1221 20:25:16.910363  328795 main.go:144] libmachine: Parsing certificate...
	I1221 20:25:16.910434  328795 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 20:25:16.910462  328795 main.go:144] libmachine: Decoding PEM data...
	I1221 20:25:16.910485  328795 main.go:144] libmachine: Parsing certificate...
	I1221 20:25:16.910880  328795 cli_runner.go:164] Run: docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 20:25:16.930131  328795 cli_runner.go:211] docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 20:25:16.930213  328795 network_create.go:284] running [docker network inspect embed-certs-413073] to gather additional debugging logs...
	I1221 20:25:16.930246  328795 cli_runner.go:164] Run: docker network inspect embed-certs-413073
	W1221 20:25:16.949828  328795 cli_runner.go:211] docker network inspect embed-certs-413073 returned with exit code 1
	I1221 20:25:16.949862  328795 network_create.go:287] error running [docker network inspect embed-certs-413073]: docker network inspect embed-certs-413073: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-413073 not found
	I1221 20:25:16.949888  328795 network_create.go:289] output of [docker network inspect embed-certs-413073]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-413073 not found
	
	** /stderr **
	I1221 20:25:16.950052  328795 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:25:16.972913  328795 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f29a930c06e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8b:29:89:af:bd} reservation:<nil>}
	I1221 20:25:16.973751  328795 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ef9486b81b4e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:74:fc:8d:d6:e1} reservation:<nil>}
	I1221 20:25:16.974566  328795 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a8eed82beee6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:19:43:42:02:f6} reservation:<nil>}
	I1221 20:25:16.975315  328795 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-99f5907a172c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:4c:ce:41:41:8b} reservation:<nil>}
	I1221 20:25:16.976197  328795 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3825326ac2ce IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:3e:22:7f:60:d0} reservation:<nil>}
	I1221 20:25:16.977280  328795 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f0b5b0}
	I1221 20:25:16.977330  328795 network_create.go:124] attempt to create docker network embed-certs-413073 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1221 20:25:16.977410  328795 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-413073 embed-certs-413073
	I1221 20:25:17.029351  328795 network_create.go:108] docker network embed-certs-413073 192.168.94.0/24 created
	I1221 20:25:17.029379  328795 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-413073" container
	I1221 20:25:17.029470  328795 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 20:25:17.049642  328795 cli_runner.go:164] Run: docker volume create embed-certs-413073 --label name.minikube.sigs.k8s.io=embed-certs-413073 --label created_by.minikube.sigs.k8s.io=true
	I1221 20:25:17.068088  328795 oci.go:103] Successfully created a docker volume embed-certs-413073
	I1221 20:25:17.068169  328795 cli_runner.go:164] Run: docker run --rm --name embed-certs-413073-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-413073 --entrypoint /usr/bin/test -v embed-certs-413073:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 20:25:17.498043  328795 oci.go:107] Successfully prepared a docker volume embed-certs-413073
	I1221 20:25:17.498135  328795 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:25:17.498154  328795 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 20:25:17.498382  328795 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-413073:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 20:25:21.242434  305216 pod_ready.go:94] pod "coredns-66bc5c9577-r87rl" is "Ready"
	I1221 20:25:21.242467  305216 pod_ready.go:86] duration metric: took 28.095843497s for pod "coredns-66bc5c9577-r87rl" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.244889  305216 pod_ready.go:83] waiting for pod "etcd-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.248500  305216 pod_ready.go:94] pod "etcd-bridge-149976" is "Ready"
	I1221 20:25:21.248519  305216 pod_ready.go:86] duration metric: took 3.608019ms for pod "etcd-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.250371  305216 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.253742  305216 pod_ready.go:94] pod "kube-apiserver-bridge-149976" is "Ready"
	I1221 20:25:21.253762  305216 pod_ready.go:86] duration metric: took 3.375225ms for pod "kube-apiserver-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.255695  305216 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.350280  305216 pod_ready.go:94] pod "kube-controller-manager-bridge-149976" is "Ready"
	I1221 20:25:21.350309  305216 pod_ready.go:86] duration metric: took 94.592482ms for pod "kube-controller-manager-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.575840  305216 pod_ready.go:83] waiting for pod "kube-proxy-g7rwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:21.951076  305216 pod_ready.go:94] pod "kube-proxy-g7rwr" is "Ready"
	I1221 20:25:21.951107  305216 pod_ready.go:86] duration metric: took 375.244537ms for pod "kube-proxy-g7rwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:22.150440  305216 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:22.551570  305216 pod_ready.go:94] pod "kube-scheduler-bridge-149976" is "Ready"
	I1221 20:25:22.551602  305216 pod_ready.go:86] duration metric: took 401.135134ms for pod "kube-scheduler-bridge-149976" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:22.551618  305216 pod_ready.go:40] duration metric: took 39.411746161s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:25:22.608553  305216 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:25:22.611111  305216 out.go:179] * Done! kubectl is now configured to use "bridge-149976" cluster and "default" namespace by default
	I1221 20:25:19.254097  321051 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.007538507s
	I1221 20:25:20.316743  321051 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.07021615s
	I1221 20:25:23.247676  321051 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001226707s
	I1221 20:25:23.264856  321051 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 20:25:23.276482  321051 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 20:25:23.285997  321051 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 20:25:23.286312  321051 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-328404 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 20:25:23.296480  321051 kubeadm.go:319] [bootstrap-token] Using token: os9pkf.nh0eph66jul083xi
	I1221 20:25:23.298022  321051 out.go:252]   - Configuring RBAC rules ...
	I1221 20:25:23.298188  321051 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 20:25:23.304812  321051 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 20:25:23.311116  321051 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 20:25:23.313837  321051 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 20:25:23.316645  321051 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 20:25:23.320612  321051 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 20:25:23.653829  321051 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 20:25:24.082744  321051 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 20:25:24.656548  321051 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 20:25:24.657818  321051 kubeadm.go:319] 
	I1221 20:25:24.657948  321051 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 20:25:24.657971  321051 kubeadm.go:319] 
	I1221 20:25:24.658098  321051 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 20:25:24.658109  321051 kubeadm.go:319] 
	I1221 20:25:24.658167  321051 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 20:25:24.658312  321051 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 20:25:24.658393  321051 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 20:25:24.658405  321051 kubeadm.go:319] 
	I1221 20:25:24.658491  321051 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 20:25:24.658531  321051 kubeadm.go:319] 
	I1221 20:25:24.658602  321051 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 20:25:24.658608  321051 kubeadm.go:319] 
	I1221 20:25:24.658683  321051 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 20:25:24.658786  321051 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 20:25:24.658879  321051 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 20:25:24.658885  321051 kubeadm.go:319] 
	I1221 20:25:24.659003  321051 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 20:25:24.659109  321051 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 20:25:24.659114  321051 kubeadm.go:319] 
	I1221 20:25:24.659241  321051 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token os9pkf.nh0eph66jul083xi \
	I1221 20:25:24.659385  321051 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 20:25:24.659423  321051 kubeadm.go:319] 	--control-plane 
	I1221 20:25:24.659433  321051 kubeadm.go:319] 
	I1221 20:25:24.659550  321051 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 20:25:24.659569  321051 kubeadm.go:319] 
	I1221 20:25:24.659694  321051 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token os9pkf.nh0eph66jul083xi \
	I1221 20:25:24.659863  321051 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 20:25:24.662827  321051 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:25:24.662971  321051 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 20:25:24.663004  321051 cni.go:84] Creating CNI manager for ""
	I1221 20:25:24.663018  321051 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:24.665748  321051 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1221 20:25:22.445984  313519 node_ready.go:57] node "old-k8s-version-699289" has "Ready":"False" status (will retry)
	W1221 20:25:24.945769  313519 node_ready.go:57] node "old-k8s-version-699289" has "Ready":"False" status (will retry)
	I1221 20:25:22.282033  328795 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-413073:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (4.783598678s)
	I1221 20:25:22.282078  328795 kic.go:203] duration metric: took 4.783920636s to extract preloaded images to volume ...
	W1221 20:25:22.282196  328795 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 20:25:22.282259  328795 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 20:25:22.282316  328795 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 20:25:22.344056  328795 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-413073 --name embed-certs-413073 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-413073 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-413073 --network embed-certs-413073 --ip 192.168.94.2 --volume embed-certs-413073:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 20:25:22.676493  328795 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Running}}
	I1221 20:25:22.697992  328795 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:25:22.721091  328795 cli_runner.go:164] Run: docker exec embed-certs-413073 stat /var/lib/dpkg/alternatives/iptables
	I1221 20:25:22.780938  328795 oci.go:144] the created container "embed-certs-413073" has a running status.
	I1221 20:25:22.780973  328795 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa...
	I1221 20:25:22.968880  328795 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 20:25:23.006394  328795 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:25:23.033094  328795 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 20:25:23.033120  328795 kic_runner.go:114] Args: [docker exec --privileged embed-certs-413073 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 20:25:23.085814  328795 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:25:23.106890  328795 machine.go:94] provisionDockerMachine start ...
	I1221 20:25:23.107005  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:23.129429  328795 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:23.129879  328795 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1221 20:25:23.129896  328795 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:25:23.281710  328795 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:25:23.281738  328795 ubuntu.go:182] provisioning hostname "embed-certs-413073"
	I1221 20:25:23.281807  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:23.307524  328795 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:23.307808  328795 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1221 20:25:23.307827  328795 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-413073 && echo "embed-certs-413073" | sudo tee /etc/hostname
	I1221 20:25:23.467026  328795 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:25:23.467115  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:23.488622  328795 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:23.488946  328795 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1221 20:25:23.488980  328795 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-413073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-413073/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-413073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:25:23.631766  328795 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:25:23.631798  328795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:25:23.631835  328795 ubuntu.go:190] setting up certificates
	I1221 20:25:23.631847  328795 provision.go:84] configureAuth start
	I1221 20:25:23.631910  328795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:25:23.651258  328795 provision.go:143] copyHostCerts
	I1221 20:25:23.651324  328795 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:25:23.651340  328795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:25:23.651426  328795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:25:23.651564  328795 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:25:23.651577  328795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:25:23.651619  328795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:25:23.651722  328795 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:25:23.651733  328795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:25:23.651772  328795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:25:23.651862  328795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.embed-certs-413073 san=[127.0.0.1 192.168.94.2 embed-certs-413073 localhost minikube]
	I1221 20:25:23.738675  328795 provision.go:177] copyRemoteCerts
	I1221 20:25:23.738732  328795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:25:23.738767  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:23.759032  328795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:25:23.868928  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:25:23.901142  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1221 20:25:23.937082  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:25:23.976803  328795 provision.go:87] duration metric: took 344.814853ms to configureAuth
	I1221 20:25:23.976916  328795 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:25:23.977274  328795 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:25:23.977568  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:24.005532  328795 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:24.005843  328795 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1221 20:25:24.005866  328795 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:25:24.362151  328795 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:25:24.362184  328795 machine.go:97] duration metric: took 1.255266386s to provisionDockerMachine
	I1221 20:25:24.362197  328795 client.go:176] duration metric: took 7.45200063s to LocalClient.Create
	I1221 20:25:24.362221  328795 start.go:167] duration metric: took 7.452066237s to libmachine.API.Create "embed-certs-413073"
	I1221 20:25:24.362285  328795 start.go:293] postStartSetup for "embed-certs-413073" (driver="docker")
	I1221 20:25:24.362302  328795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:25:24.362376  328795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:25:24.362423  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:24.384835  328795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:25:24.488267  328795 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:25:24.492413  328795 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:25:24.492447  328795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:25:24.492462  328795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:25:24.492528  328795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:25:24.492645  328795 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:25:24.492812  328795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:25:24.501344  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:25:24.528456  328795 start.go:296] duration metric: took 166.153383ms for postStartSetup
	I1221 20:25:24.528847  328795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:25:24.551326  328795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:25:24.551632  328795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:25:24.551683  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:24.573014  328795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:25:24.677688  328795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:25:24.683696  328795 start.go:128] duration metric: took 7.775485407s to createHost
	I1221 20:25:24.683721  328795 start.go:83] releasing machines lock for "embed-certs-413073", held for 7.775642307s
	I1221 20:25:24.683790  328795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:25:24.708094  328795 ssh_runner.go:195] Run: cat /version.json
	I1221 20:25:24.708152  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:24.708265  328795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:25:24.708350  328795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:25:24.731387  328795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:25:24.731825  328795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:25:24.905101  328795 ssh_runner.go:195] Run: systemctl --version
	I1221 20:25:24.913840  328795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:25:24.959157  328795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:25:24.965462  328795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:25:24.965608  328795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:25:25.022545  328795 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 20:25:25.022573  328795 start.go:496] detecting cgroup driver to use...
	I1221 20:25:25.022608  328795 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:25:25.022655  328795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:25:25.046355  328795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:25:25.063803  328795 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:25:25.063886  328795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:25:25.088076  328795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:25:25.113918  328795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:25:25.217369  328795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:25:25.312101  328795 docker.go:234] disabling docker service ...
	I1221 20:25:25.312158  328795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:25:25.330404  328795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:25:25.343912  328795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:25:25.426170  328795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:25:25.512946  328795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:25:25.525539  328795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:25:25.539342  328795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:25:25.539401  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.549666  328795 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:25:25.549724  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.558552  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.567013  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.575505  328795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:25:25.583460  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.592163  328795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.605360  328795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:25:25.615246  328795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:25:25.622783  328795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:25:25.629972  328795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:25:25.714935  328795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:25:26.138345  328795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:25:26.138421  328795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:25:26.142790  328795 start.go:564] Will wait 60s for crictl version
	I1221 20:25:26.142858  328795 ssh_runner.go:195] Run: which crictl
	I1221 20:25:26.146469  328795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:25:26.175556  328795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:25:26.175640  328795 ssh_runner.go:195] Run: crio --version
	I1221 20:25:26.218680  328795 ssh_runner.go:195] Run: crio --version
	I1221 20:25:26.249786  328795 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:25:26.251780  328795 cli_runner.go:164] Run: docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:25:26.270209  328795 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1221 20:25:26.274526  328795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:25:26.284617  328795 kubeadm.go:884] updating cluster {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:25:26.284724  328795 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:25:26.284765  328795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:25:26.316040  328795 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:25:26.316061  328795 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:25:26.316107  328795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:25:26.342413  328795 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:25:26.342440  328795 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:25:26.342449  328795 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1221 20:25:26.342556  328795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-413073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:25:26.342619  328795 ssh_runner.go:195] Run: crio config
	I1221 20:25:26.387264  328795 cni.go:84] Creating CNI manager for ""
	I1221 20:25:26.387296  328795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:26.387318  328795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:25:26.387343  328795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-413073 NodeName:embed-certs-413073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:25:26.387473  328795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-413073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:25:26.387542  328795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:25:26.395773  328795 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:25:26.395842  328795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:25:26.403548  328795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1221 20:25:26.415735  328795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:25:26.430544  328795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1221 20:25:26.443219  328795 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:25:26.447427  328795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:25:26.457264  328795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:25:26.540861  328795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:25:26.570332  328795 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073 for IP: 192.168.94.2
	I1221 20:25:26.570360  328795 certs.go:195] generating shared ca certs ...
	I1221 20:25:26.570379  328795 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:26.570587  328795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:25:26.570644  328795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:25:26.570658  328795 certs.go:257] generating profile certs ...
	I1221 20:25:26.570734  328795 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key
	I1221 20:25:26.570750  328795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.crt with IP's: []
	I1221 20:25:24.666851  321051 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 20:25:24.672175  321051 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1221 20:25:24.672193  321051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1221 20:25:24.689518  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 20:25:24.955696  321051 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 20:25:24.955845  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:24.955972  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328404 minikube.k8s.io/updated_at=2025_12_21T20_25_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=no-preload-328404 minikube.k8s.io/primary=true
	I1221 20:25:25.099109  321051 ops.go:34] apiserver oom_adj: -16
	I1221 20:25:25.099300  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:25.600060  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:26.099386  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:26.600372  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:27.099329  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:27.600027  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:28.099381  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:28.599361  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:29.099368  321051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:25:29.169635  321051 kubeadm.go:1114] duration metric: took 4.213835324s to wait for elevateKubeSystemPrivileges
	I1221 20:25:29.169665  321051 kubeadm.go:403] duration metric: took 13.708058062s to StartCluster
	I1221 20:25:29.169682  321051 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:29.169739  321051 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:25:29.170808  321051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:29.171025  321051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 20:25:29.171025  321051 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:25:29.171122  321051 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:25:29.171217  321051 addons.go:70] Setting storage-provisioner=true in profile "no-preload-328404"
	I1221 20:25:29.171251  321051 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:25:29.171277  321051 addons.go:70] Setting default-storageclass=true in profile "no-preload-328404"
	I1221 20:25:29.171323  321051 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328404"
	I1221 20:25:29.171256  321051 addons.go:239] Setting addon storage-provisioner=true in "no-preload-328404"
	I1221 20:25:29.171469  321051 host.go:66] Checking if "no-preload-328404" exists ...
	I1221 20:25:29.171707  321051 cli_runner.go:164] Run: docker container inspect no-preload-328404 --format={{.State.Status}}
	I1221 20:25:29.172016  321051 cli_runner.go:164] Run: docker container inspect no-preload-328404 --format={{.State.Status}}
	I1221 20:25:29.172740  321051 out.go:179] * Verifying Kubernetes components...
	I1221 20:25:29.173937  321051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:25:29.196872  321051 addons.go:239] Setting addon default-storageclass=true in "no-preload-328404"
	I1221 20:25:29.196920  321051 host.go:66] Checking if "no-preload-328404" exists ...
	I1221 20:25:29.197187  321051 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:25:29.197401  321051 cli_runner.go:164] Run: docker container inspect no-preload-328404 --format={{.State.Status}}
	I1221 20:25:29.198496  321051 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:25:29.198515  321051 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:25:29.198564  321051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328404
	I1221 20:25:29.227508  321051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/no-preload-328404/id_rsa Username:docker}
	I1221 20:25:29.232364  321051 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:25:29.232391  321051 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:25:29.232462  321051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328404
	I1221 20:25:29.262379  321051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/no-preload-328404/id_rsa Username:docker}
	I1221 20:25:29.275834  321051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 20:25:29.341219  321051 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:25:29.357175  321051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:25:29.384200  321051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:25:29.478448  321051 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1221 20:25:29.479704  321051 node_ready.go:35] waiting up to 6m0s for node "no-preload-328404" to be "Ready" ...
	I1221 20:25:29.698597  321051 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
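The "host record injected" line above comes from rewriting the coredns ConfigMap in place: the Corefile gains a hosts{} block mapping host.minikube.internal to the gateway IP ahead of the forward plugin. A simplified sketch of the same edit, assuming kubectl access to the cluster from the host (the test runs it through the node's pinned kubectl instead):

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -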
	W1221 20:25:27.444914  313519 node_ready.go:57] node "old-k8s-version-699289" has "Ready":"False" status (will retry)
	W1221 20:25:29.445446  313519 node_ready.go:57] node "old-k8s-version-699289" has "Ready":"False" status (will retry)
	I1221 20:25:30.945361  313519 node_ready.go:49] node "old-k8s-version-699289" is "Ready"
	I1221 20:25:30.945412  313519 node_ready.go:38] duration metric: took 13.003426776s for node "old-k8s-version-699289" to be "Ready" ...
	I1221 20:25:30.945471  313519 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:25:30.945530  313519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:25:30.962955  313519 api_server.go:72] duration metric: took 13.483901695s to wait for apiserver process to appear ...
	I1221 20:25:30.962983  313519 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:25:30.963008  313519 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:25:30.970677  313519 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:25:30.972315  313519 api_server.go:141] control plane version: v1.28.0
	I1221 20:25:30.972359  313519 api_server.go:131] duration metric: took 9.368947ms to wait for apiserver health ...
	I1221 20:25:30.972386  313519 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:25:30.976667  313519 system_pods.go:59] 8 kube-system pods found
	I1221 20:25:30.976705  313519 system_pods.go:61] "coredns-5dd5756b68-v285b" [bd0c7c2b-2c82-4060-858f-e812ffc45b5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:30.976711  313519 system_pods.go:61] "etcd-old-k8s-version-699289" [0a2bd4a3-fea4-4085-9406-1987b025b2c8] Running
	I1221 20:25:30.976715  313519 system_pods.go:61] "kindnet-g5mb8" [f792c035-76d3-4389-8954-def1a475b16d] Running
	I1221 20:25:30.976719  313519 system_pods.go:61] "kube-apiserver-old-k8s-version-699289" [da1b0628-0a03-4220-b324-979501a328f8] Running
	I1221 20:25:30.976724  313519 system_pods.go:61] "kube-controller-manager-old-k8s-version-699289" [d785897f-f615-4b31-b34f-01ef42ce6194] Running
	I1221 20:25:30.976727  313519 system_pods.go:61] "kube-proxy-hsngj" [c431b721-0655-453a-b589-066502c37abc] Running
	I1221 20:25:30.976731  313519 system_pods.go:61] "kube-scheduler-old-k8s-version-699289" [f493fb99-9e07-4ed6-b30f-b174aab1a435] Running
	I1221 20:25:30.976735  313519 system_pods.go:61] "storage-provisioner" [f5aafc9c-4f84-4134-b0a5-878e925fefbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:30.976740  313519 system_pods.go:74] duration metric: took 4.348265ms to wait for pod list to return data ...
	I1221 20:25:30.976745  313519 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:25:30.978965  313519 default_sa.go:45] found service account: "default"
	I1221 20:25:30.978985  313519 default_sa.go:55] duration metric: took 2.233949ms for default service account to be created ...
	I1221 20:25:30.978995  313519 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:25:30.982714  313519 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:30.982743  313519 system_pods.go:89] "coredns-5dd5756b68-v285b" [bd0c7c2b-2c82-4060-858f-e812ffc45b5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:30.982750  313519 system_pods.go:89] "etcd-old-k8s-version-699289" [0a2bd4a3-fea4-4085-9406-1987b025b2c8] Running
	I1221 20:25:30.982763  313519 system_pods.go:89] "kindnet-g5mb8" [f792c035-76d3-4389-8954-def1a475b16d] Running
	I1221 20:25:30.982778  313519 system_pods.go:89] "kube-apiserver-old-k8s-version-699289" [da1b0628-0a03-4220-b324-979501a328f8] Running
	I1221 20:25:30.982786  313519 system_pods.go:89] "kube-controller-manager-old-k8s-version-699289" [d785897f-f615-4b31-b34f-01ef42ce6194] Running
	I1221 20:25:30.982792  313519 system_pods.go:89] "kube-proxy-hsngj" [c431b721-0655-453a-b589-066502c37abc] Running
	I1221 20:25:30.982813  313519 system_pods.go:89] "kube-scheduler-old-k8s-version-699289" [f493fb99-9e07-4ed6-b30f-b174aab1a435] Running
	I1221 20:25:30.982821  313519 system_pods.go:89] "storage-provisioner" [f5aafc9c-4f84-4134-b0a5-878e925fefbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:30.982851  313519 retry.go:84] will retry after 200ms: missing components: kube-dns
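The retry above fires because CoreDNS (kube-dns) is still Pending while every other kube-system pod is already Running. A hedged one-liner to wait for that last component instead of polling by hand; the timeout is an assumed value:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s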
	I1221 20:25:26.987823  328795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.crt ...
	I1221 20:25:26.987849  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.crt: {Name:mk9ee74faeffadefafc6feed2c6db535bedadd27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:26.988001  328795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key ...
	I1221 20:25:26.988012  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key: {Name:mk23ea2315dc05ba22d6a4121640a9c83f683cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:26.988087  328795 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206
	I1221 20:25:26.988104  328795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt.865f7206 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1221 20:25:27.039757  328795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt.865f7206 ...
	I1221 20:25:27.039785  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt.865f7206: {Name:mk265dda1b285c394a533b0d10e52466cb7f6c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:27.039960  328795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206 ...
	I1221 20:25:27.039978  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206: {Name:mk9e7eb2ead5d107f7aa657766ced74f5045997d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:27.040095  328795 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt.865f7206 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt
	I1221 20:25:27.040203  328795 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key
	I1221 20:25:27.040308  328795 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key
	I1221 20:25:27.040334  328795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt with IP's: []
	I1221 20:25:27.072568  328795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt ...
	I1221 20:25:27.072595  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt: {Name:mk9e2a6e4c9ce8d9db6e0ea0264dcf8aea79d0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:27.072759  328795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key ...
	I1221 20:25:27.072779  328795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key: {Name:mke00038289624fa85c3cc8564a68b92431ba304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:27.073018  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:25:27.073074  328795 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:25:27.073090  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:25:27.073125  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:25:27.073160  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:25:27.073193  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:25:27.073260  328795 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:25:27.074036  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:25:27.093472  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:25:27.111048  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:25:27.128757  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:25:27.146646  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1221 20:25:27.165985  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:25:27.182904  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:25:27.200576  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:25:27.217739  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:25:27.236148  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:25:27.253851  328795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:25:27.270578  328795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:25:27.282876  328795 ssh_runner.go:195] Run: openssl version
	I1221 20:25:27.288711  328795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:25:27.296572  328795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:25:27.303959  328795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:25:27.307477  328795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:25:27.307526  328795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:25:27.344316  328795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:25:27.351929  328795 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
	I1221 20:25:27.359345  328795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:25:27.366534  328795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:25:27.373572  328795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:25:27.376984  328795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:25:27.377029  328795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:25:27.412886  328795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:25:27.420470  328795 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:25:27.428079  328795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:25:27.435482  328795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:25:27.442878  328795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:25:27.446734  328795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:25:27.446779  328795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:25:27.487381  328795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:25:27.495810  328795 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:25:27.503595  328795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:25:27.507624  328795 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:25:27.507683  328795 kubeadm.go:401] StartCluster: {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:25:27.507753  328795 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:25:27.507806  328795 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:25:27.534290  328795 cri.go:96] found id: ""
	I1221 20:25:27.534366  328795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:25:27.542659  328795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:25:27.550627  328795 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:25:27.550676  328795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:25:27.558298  328795 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:25:27.558317  328795 kubeadm.go:158] found existing configuration files:
	
	I1221 20:25:27.558360  328795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:25:27.565485  328795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:25:27.565539  328795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:25:27.572535  328795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:25:27.579481  328795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:25:27.579519  328795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:25:27.586732  328795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:25:27.594140  328795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:25:27.594209  328795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:25:27.601697  328795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:25:27.610129  328795 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:25:27.610173  328795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:25:27.617989  328795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:25:27.682466  328795 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:25:27.745055  328795 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
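kubeadm's preflight emits two warnings here: kernel-config verification is skipped because the "configs" module is not present on the 6.8.0-1045-gcp kernel, and the kubelet service is not enabled at boot. The second warning carries its own remedy; a hedged version to run inside the node (for example via minikube ssh):

	# Sketch, as suggested by the kubeadm warning above.
	sudo systemctl enable kubelet.service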
	I1221 20:25:31.181936  313519 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:31.181999  313519 system_pods.go:89] "coredns-5dd5756b68-v285b" [bd0c7c2b-2c82-4060-858f-e812ffc45b5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:31.182024  313519 system_pods.go:89] "etcd-old-k8s-version-699289" [0a2bd4a3-fea4-4085-9406-1987b025b2c8] Running
	I1221 20:25:31.182034  313519 system_pods.go:89] "kindnet-g5mb8" [f792c035-76d3-4389-8954-def1a475b16d] Running
	I1221 20:25:31.182048  313519 system_pods.go:89] "kube-apiserver-old-k8s-version-699289" [da1b0628-0a03-4220-b324-979501a328f8] Running
	I1221 20:25:31.182055  313519 system_pods.go:89] "kube-controller-manager-old-k8s-version-699289" [d785897f-f615-4b31-b34f-01ef42ce6194] Running
	I1221 20:25:31.182059  313519 system_pods.go:89] "kube-proxy-hsngj" [c431b721-0655-453a-b589-066502c37abc] Running
	I1221 20:25:31.182065  313519 system_pods.go:89] "kube-scheduler-old-k8s-version-699289" [f493fb99-9e07-4ed6-b30f-b174aab1a435] Running
	I1221 20:25:31.182074  313519 system_pods.go:89] "storage-provisioner" [f5aafc9c-4f84-4134-b0a5-878e925fefbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:31.485176  313519 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:31.485210  313519 system_pods.go:89] "coredns-5dd5756b68-v285b" [bd0c7c2b-2c82-4060-858f-e812ffc45b5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:31.485217  313519 system_pods.go:89] "etcd-old-k8s-version-699289" [0a2bd4a3-fea4-4085-9406-1987b025b2c8] Running
	I1221 20:25:31.485250  313519 system_pods.go:89] "kindnet-g5mb8" [f792c035-76d3-4389-8954-def1a475b16d] Running
	I1221 20:25:31.485256  313519 system_pods.go:89] "kube-apiserver-old-k8s-version-699289" [da1b0628-0a03-4220-b324-979501a328f8] Running
	I1221 20:25:31.485262  313519 system_pods.go:89] "kube-controller-manager-old-k8s-version-699289" [d785897f-f615-4b31-b34f-01ef42ce6194] Running
	I1221 20:25:31.485270  313519 system_pods.go:89] "kube-proxy-hsngj" [c431b721-0655-453a-b589-066502c37abc] Running
	I1221 20:25:31.485279  313519 system_pods.go:89] "kube-scheduler-old-k8s-version-699289" [f493fb99-9e07-4ed6-b30f-b174aab1a435] Running
	I1221 20:25:31.485295  313519 system_pods.go:89] "storage-provisioner" [f5aafc9c-4f84-4134-b0a5-878e925fefbd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:31.848882  313519 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:31.848910  313519 system_pods.go:89] "coredns-5dd5756b68-v285b" [bd0c7c2b-2c82-4060-858f-e812ffc45b5e] Running
	I1221 20:25:31.848916  313519 system_pods.go:89] "etcd-old-k8s-version-699289" [0a2bd4a3-fea4-4085-9406-1987b025b2c8] Running
	I1221 20:25:31.848920  313519 system_pods.go:89] "kindnet-g5mb8" [f792c035-76d3-4389-8954-def1a475b16d] Running
	I1221 20:25:31.848924  313519 system_pods.go:89] "kube-apiserver-old-k8s-version-699289" [da1b0628-0a03-4220-b324-979501a328f8] Running
	I1221 20:25:31.848928  313519 system_pods.go:89] "kube-controller-manager-old-k8s-version-699289" [d785897f-f615-4b31-b34f-01ef42ce6194] Running
	I1221 20:25:31.848931  313519 system_pods.go:89] "kube-proxy-hsngj" [c431b721-0655-453a-b589-066502c37abc] Running
	I1221 20:25:31.848937  313519 system_pods.go:89] "kube-scheduler-old-k8s-version-699289" [f493fb99-9e07-4ed6-b30f-b174aab1a435] Running
	I1221 20:25:31.848940  313519 system_pods.go:89] "storage-provisioner" [f5aafc9c-4f84-4134-b0a5-878e925fefbd] Running
	I1221 20:25:31.848947  313519 system_pods.go:126] duration metric: took 869.947431ms to wait for k8s-apps to be running ...
	I1221 20:25:31.848957  313519 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:25:31.848999  313519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:25:31.861772  313519 system_svc.go:56] duration metric: took 12.807597ms WaitForService to wait for kubelet
	I1221 20:25:31.861800  313519 kubeadm.go:587] duration metric: took 14.382771564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:25:31.861820  313519 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:25:31.864275  313519 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:25:31.864301  313519 node_conditions.go:123] node cpu capacity is 8
	I1221 20:25:31.864321  313519 node_conditions.go:105] duration metric: took 2.494428ms to run NodePressure ...
	I1221 20:25:31.864336  313519 start.go:242] waiting for startup goroutines ...
	I1221 20:25:31.864349  313519 start.go:247] waiting for cluster config update ...
	I1221 20:25:31.864366  313519 start.go:256] writing updated cluster config ...
	I1221 20:25:31.864716  313519 ssh_runner.go:195] Run: rm -f paused
	I1221 20:25:31.868205  313519 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:25:31.872280  313519 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-v285b" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.876415  313519 pod_ready.go:94] pod "coredns-5dd5756b68-v285b" is "Ready"
	I1221 20:25:31.876445  313519 pod_ready.go:86] duration metric: took 4.139342ms for pod "coredns-5dd5756b68-v285b" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.878983  313519 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.882610  313519 pod_ready.go:94] pod "etcd-old-k8s-version-699289" is "Ready"
	I1221 20:25:31.882628  313519 pod_ready.go:86] duration metric: took 3.627268ms for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.885210  313519 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.889309  313519 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-699289" is "Ready"
	I1221 20:25:31.889326  313519 pod_ready.go:86] duration metric: took 4.076565ms for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:31.891597  313519 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:32.272687  313519 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-699289" is "Ready"
	I1221 20:25:32.272717  313519 pod_ready.go:86] duration metric: took 381.101516ms for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:32.473749  313519 pod_ready.go:83] waiting for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:32.872844  313519 pod_ready.go:94] pod "kube-proxy-hsngj" is "Ready"
	I1221 20:25:32.872871  313519 pod_ready.go:86] duration metric: took 399.093786ms for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:33.073380  313519 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:33.473213  313519 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-699289" is "Ready"
	I1221 20:25:33.473259  313519 pod_ready.go:86] duration metric: took 399.843591ms for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:33.473275  313519 pod_ready.go:40] duration metric: took 1.605036492s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:25:33.518883  313519 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1221 20:25:33.520764  313519 out.go:203] 
	W1221 20:25:33.522319  313519 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1221 20:25:33.523468  313519 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1221 20:25:33.525121  313519 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-699289" cluster and "default" namespace by default
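The skew warning above flags a kubectl 1.35.0 client against a 1.28.0 control plane (minor skew 7). As the output suggests, a version-matched kubectl can be invoked through minikube itself; the alias is an optional assumption for an interactive bash session:

	# Sketch: use minikube's kubectl for this profile instead of /usr/local/bin/kubectl.
	minikube -p old-k8s-version-699289 kubectl -- get pods -A
	alias kubectl='minikube -p old-k8s-version-699289 kubectl --'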
	I1221 20:25:29.699935  321051 addons.go:530] duration metric: took 528.814213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1221 20:25:29.984096  321051 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-328404" context rescaled to 1 replicas
	W1221 20:25:31.483290  321051 node_ready.go:57] node "no-preload-328404" has "Ready":"False" status (will retry)
	W1221 20:25:33.483393  321051 node_ready.go:57] node "no-preload-328404" has "Ready":"False" status (will retry)
	W1221 20:25:35.983922  321051 node_ready.go:57] node "no-preload-328404" has "Ready":"False" status (will retry)
	W1221 20:25:38.483401  321051 node_ready.go:57] node "no-preload-328404" has "Ready":"False" status (will retry)
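These retries poll the node object until it reports Ready=True, within the 6m0s budget set earlier for no-preload-328404. A hedged equivalent from the command line:

	# Sketch: wait for the node to become Ready within the same 6-minute budget.
	kubectl wait --for=condition=Ready node/no-preload-328404 --timeout=6m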
	I1221 20:25:40.586999  328795 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1221 20:25:40.587107  328795 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 20:25:40.587285  328795 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 20:25:40.587380  328795 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 20:25:40.587432  328795 kubeadm.go:319] OS: Linux
	I1221 20:25:40.587520  328795 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 20:25:40.587592  328795 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 20:25:40.587666  328795 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 20:25:40.587733  328795 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 20:25:40.587796  328795 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 20:25:40.587863  328795 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 20:25:40.587942  328795 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 20:25:40.588015  328795 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 20:25:40.588112  328795 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 20:25:40.588266  328795 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 20:25:40.588375  328795 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 20:25:40.588469  328795 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 20:25:40.589739  328795 out.go:252]   - Generating certificates and keys ...
	I1221 20:25:40.589833  328795 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:25:40.589907  328795 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:25:40.590012  328795 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:25:40.590100  328795 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:25:40.590193  328795 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:25:40.590283  328795 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:25:40.590355  328795 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:25:40.590506  328795 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-413073 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1221 20:25:40.590603  328795 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:25:40.590756  328795 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-413073 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1221 20:25:40.590842  328795 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:25:40.590936  328795 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:25:40.591006  328795 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:25:40.591078  328795 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:25:40.591150  328795 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:25:40.591405  328795 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:25:40.591479  328795 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:25:40.591558  328795 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:25:40.591649  328795 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:25:40.591775  328795 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:25:40.591872  328795 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:25:40.593077  328795 out.go:252]   - Booting up control plane ...
	I1221 20:25:40.593190  328795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 20:25:40.593316  328795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 20:25:40.593383  328795 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 20:25:40.593469  328795 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 20:25:40.593554  328795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 20:25:40.593673  328795 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 20:25:40.593756  328795 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 20:25:40.593797  328795 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 20:25:40.593952  328795 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 20:25:40.594076  328795 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 20:25:40.594156  328795 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000715895s
	I1221 20:25:40.594300  328795 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 20:25:40.594416  328795 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1221 20:25:40.594561  328795 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 20:25:40.594663  328795 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 20:25:40.594763  328795 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005247447s
	I1221 20:25:40.594867  328795 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.226212212s
	I1221 20:25:40.594959  328795 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001315916s
	I1221 20:25:40.595074  328795 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 20:25:40.595244  328795 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 20:25:40.595327  328795 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 20:25:40.595517  328795 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-413073 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 20:25:40.595602  328795 kubeadm.go:319] [bootstrap-token] Using token: 3t3zzr.iyhhw6frl4t182u0
	I1221 20:25:40.597452  328795 out.go:252]   - Configuring RBAC rules ...
	I1221 20:25:40.597547  328795 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 20:25:40.597643  328795 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 20:25:40.597778  328795 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 20:25:40.597890  328795 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 20:25:40.598016  328795 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 20:25:40.598095  328795 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 20:25:40.598195  328795 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 20:25:40.598273  328795 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 20:25:40.598333  328795 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 20:25:40.598343  328795 kubeadm.go:319] 
	I1221 20:25:40.598392  328795 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 20:25:40.598402  328795 kubeadm.go:319] 
	I1221 20:25:40.598533  328795 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 20:25:40.598540  328795 kubeadm.go:319] 
	I1221 20:25:40.598562  328795 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 20:25:40.598652  328795 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 20:25:40.598738  328795 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 20:25:40.598747  328795 kubeadm.go:319] 
	I1221 20:25:40.598823  328795 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 20:25:40.598831  328795 kubeadm.go:319] 
	I1221 20:25:40.598897  328795 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 20:25:40.598906  328795 kubeadm.go:319] 
	I1221 20:25:40.598982  328795 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 20:25:40.599085  328795 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 20:25:40.599153  328795 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 20:25:40.599161  328795 kubeadm.go:319] 
	I1221 20:25:40.599259  328795 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 20:25:40.599339  328795 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 20:25:40.599355  328795 kubeadm.go:319] 
	I1221 20:25:40.599479  328795 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3t3zzr.iyhhw6frl4t182u0 \
	I1221 20:25:40.599599  328795 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 20:25:40.599638  328795 kubeadm.go:319] 	--control-plane 
	I1221 20:25:40.599647  328795 kubeadm.go:319] 
	I1221 20:25:40.599747  328795 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 20:25:40.599754  328795 kubeadm.go:319] 
	I1221 20:25:40.599825  328795 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3t3zzr.iyhhw6frl4t182u0 \
	I1221 20:25:40.599929  328795 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 20:25:40.599941  328795 cni.go:84] Creating CNI manager for ""
	I1221 20:25:40.599948  328795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:40.601846  328795 out.go:179] * Configuring CNI (Container Networking Interface) ...
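With the docker driver and the crio runtime, minikube picks kindnet as the CNI and applies its manifest through the node's pinned kubectl (the cni.yaml apply shown earlier for the other profile follows the same path). A hedged follow-up check that the CNI pods actually started; the label selector is an assumption about the kindnet manifest:

	kubectl -n kube-system get pods -l app=kindnet -o wide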
	
	
	==> CRI-O <==
	Dec 21 20:25:30 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:30.896918693Z" level=info msg="Starting container: df7178ff409bb5f9c1900ea5aa3287cc040716f9602dabf652bffa4ebd4cd5bc" id=11f77fc1-a7ba-41e8-8578-ca64558acd36 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:25:30 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:30.899710337Z" level=info msg="Started container" PID=2141 containerID=df7178ff409bb5f9c1900ea5aa3287cc040716f9602dabf652bffa4ebd4cd5bc description=kube-system/coredns-5dd5756b68-v285b/coredns id=11f77fc1-a7ba-41e8-8578-ca64558acd36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd6c3a8f477f87e8201418e449f342182dcafb39d658694e1bba43862591f59f
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.970429892Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a2f5076f-a9f4-4a3b-9ea0-bff7561e1028 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.970511293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.975868276Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:02a2bc6488ac4565a7725f4967b90b8f460b5bda9c1172793eb49c1a7c300b29 UID:8c49f147-ca7a-4fd1-8d64-3e54460c48f2 NetNS:/var/run/netns/e3f668d8-fa5f-4a98-a413-303835dd84ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138ac0}] Aliases:map[]}"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.97590529Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.986606661Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:02a2bc6488ac4565a7725f4967b90b8f460b5bda9c1172793eb49c1a7c300b29 UID:8c49f147-ca7a-4fd1-8d64-3e54460c48f2 NetNS:/var/run/netns/e3f668d8-fa5f-4a98-a413-303835dd84ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138ac0}] Aliases:map[]}"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.986741641Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.987465757Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.988635861Z" level=info msg="Ran pod sandbox 02a2bc6488ac4565a7725f4967b90b8f460b5bda9c1172793eb49c1a7c300b29 with infra container: default/busybox/POD" id=a2f5076f-a9f4-4a3b-9ea0-bff7561e1028 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.990204038Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=13833927-9be3-49f0-9707-781067c04611 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.990363322Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=13833927-9be3-49f0-9707-781067c04611 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.990417133Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=13833927-9be3-49f0-9707-781067c04611 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.990911921Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d14ed63-c8ae-4695-aee8-c689db29b856 name=/runtime.v1.ImageService/PullImage
	Dec 21 20:25:33 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:33.993597721Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.598017375Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3d14ed63-c8ae-4695-aee8-c689db29b856 name=/runtime.v1.ImageService/PullImage
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.598797745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9711d1b5-9f22-4317-9232-59c48a16d4b3 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.600161522Z" level=info msg="Creating container: default/busybox/busybox" id=315cd4d8-acc5-4aa6-91eb-4f483ca78d2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.600304792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.603749587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.604138457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.628708466Z" level=info msg="Created container 807b9cc027497606b150353fa05dcbc18c8a8b59900395399893b8d83c6de51f: default/busybox/busybox" id=315cd4d8-acc5-4aa6-91eb-4f483ca78d2d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.629253957Z" level=info msg="Starting container: 807b9cc027497606b150353fa05dcbc18c8a8b59900395399893b8d83c6de51f" id=dca6082c-e9a2-4244-9eea-6eb84f220cc4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:25:34 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:34.630910478Z" level=info msg="Started container" PID=2219 containerID=807b9cc027497606b150353fa05dcbc18c8a8b59900395399893b8d83c6de51f description=default/busybox/busybox id=dca6082c-e9a2-4244-9eea-6eb84f220cc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a2bc6488ac4565a7725f4967b90b8f460b5bda9c1172793eb49c1a7c300b29
	Dec 21 20:25:40 old-k8s-version-699289 crio[779]: time="2025-12-21T20:25:40.772252866Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	807b9cc027497       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   02a2bc6488ac4       busybox                                          default
	df7178ff409bb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   cd6c3a8f477f8       coredns-5dd5756b68-v285b                         kube-system
	d6556483cd3ef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   3c3f3ac326c0e       storage-provisioner                              kube-system
	c18d6787ae36f       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   af304c6b39753       kindnet-g5mb8                                    kube-system
	7b15763a89c9d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   7a2b8b92fa6fa       kube-proxy-hsngj                                 kube-system
	7ffa8dd884fbd       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   34adaed325113       kube-apiserver-old-k8s-version-699289            kube-system
	85ae888c513ca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   2caada1f65a97       etcd-old-k8s-version-699289                      kube-system
	bf47e8db77ca7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   ec320a7a77fac       kube-scheduler-old-k8s-version-699289            kube-system
	a4bd4585441c9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   6ad7745596344       kube-controller-manager-old-k8s-version-699289   kube-system
	
	
	==> coredns [df7178ff409bb5f9c1900ea5aa3287cc040716f9602dabf652bffa4ebd4cd5bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46644 - 51406 "HINFO IN 1571431940318844789.6298405012263025070. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069547814s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-699289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-699289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-699289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-699289
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:25:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:25:35 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:25:35 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:25:35 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:25:35 +0000   Sun, 21 Dec 2025 20:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-699289
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                5608b3c9-c686-468f-89f8-92ad8cb9ae20
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-v285b                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-699289                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-g5mb8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-699289             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-699289    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-hsngj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-699289             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller
	  Normal  NodeReady                12s                kubelet          Node old-k8s-version-699289 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [85ae888c513caed2158c8138752b4b48536ff5ad1582f31aa35c0a14d118920e] <==
	{"level":"info","ts":"2025-12-21T20:24:59.951932Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:24:59.952096Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:24:59.952132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:24:59.952155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:24:59.952195Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:24:59.952948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:24:59.953105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:24:59.953196Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:24:59.953367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-21T20:24:59.953382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:25:04.092568Z","caller":"traceutil/trace.go:171","msg":"trace[330571468] transaction","detail":"{read_only:false; response_revision:252; number_of_response:1; }","duration":"138.039373ms","start":"2025-12-21T20:25:03.954505Z","end":"2025-12-21T20:25:04.092544Z","steps":["trace[330571468] 'process raft request'  (duration: 58.496862ms)","trace[330571468] 'compare'  (duration: 79.406482ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:14.270943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.785966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-12-21T20:25:14.271023Z","caller":"traceutil/trace.go:171","msg":"trace[1524030515] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:313; }","duration":"144.888899ms","start":"2025-12-21T20:25:14.126122Z","end":"2025-12-21T20:25:14.271011Z","steps":["trace[1524030515] 'range keys from in-memory index tree'  (duration: 144.66853ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:14.546021Z","caller":"traceutil/trace.go:171","msg":"trace[1383659515] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"169.29362ms","start":"2025-12-21T20:25:14.376708Z","end":"2025-12-21T20:25:14.546001Z","steps":["trace[1383659515] 'process raft request'  (duration: 169.03936ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:14.663053Z","caller":"traceutil/trace.go:171","msg":"trace[1867243638] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"109.186769ms","start":"2025-12-21T20:25:14.553844Z","end":"2025-12-21T20:25:14.663031Z","steps":["trace[1867243638] 'process raft request'  (duration: 82.755165ms)","trace[1867243638] 'compare'  (duration: 26.32979ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:15.754804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.415289ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357466380847113 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" mod_revision:322 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" value_size:7179 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-21T20:25:15.754955Z","caller":"traceutil/trace.go:171","msg":"trace[747712779] linearizableReadLoop","detail":"{readStateIndex:334; appliedIndex:332; }","duration":"292.310831ms","start":"2025-12-21T20:25:15.462633Z","end":"2025-12-21T20:25:15.754943Z","steps":["trace[747712779] 'read index received'  (duration: 39.388878ms)","trace[747712779] 'applied index is now lower than readState.Index'  (duration: 252.921108ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:25:15.755008Z","caller":"traceutil/trace.go:171","msg":"trace[1344170087] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"328.861286ms","start":"2025-12-21T20:25:15.426127Z","end":"2025-12-21T20:25:15.754989Z","steps":["trace[1344170087] 'process raft request'  (duration: 328.763835ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:25:15.755116Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.499712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:25:15.75515Z","caller":"traceutil/trace.go:171","msg":"trace[1599471646] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:324; }","duration":"292.543553ms","start":"2025-12-21T20:25:15.462597Z","end":"2025-12-21T20:25:15.755141Z","steps":["trace[1599471646] 'agreement among raft nodes before linearized reading'  (duration: 292.450207ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:15.75503Z","caller":"traceutil/trace.go:171","msg":"trace[1120753826] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"370.284775ms","start":"2025-12-21T20:25:15.384728Z","end":"2025-12-21T20:25:15.755013Z","steps":["trace[1120753826] 'process raft request'  (duration: 117.220057ms)","trace[1120753826] 'compare'  (duration: 252.309765ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:15.755398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-21T20:25:15.384714Z","time spent":"370.629598ms","remote":"127.0.0.1:48060","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7251,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" mod_revision:322 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" value_size:7179 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-699289\" > >"}
	{"level":"warn","ts":"2025-12-21T20:25:15.755166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-21T20:25:15.426106Z","time spent":"328.99292ms","remote":"127.0.0.1:48080","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":178,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/ttl-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/ttl-controller\" value_size:119 >> failure:<>"}
	{"level":"warn","ts":"2025-12-21T20:25:15.991745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.55525ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357466380847118 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" value_size:129 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-21T20:25:15.991838Z","caller":"traceutil/trace.go:171","msg":"trace[1376867940] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"229.02218ms","start":"2025-12-21T20:25:15.762794Z","end":"2025-12-21T20:25:15.991817Z","steps":["trace[1376867940] 'process raft request'  (duration: 118.327969ms)","trace[1376867940] 'compare'  (duration: 110.423245ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:25:42 up  1:08,  0 user,  load average: 4.43, 3.86, 2.66
	Linux old-k8s-version-699289 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c18d6787ae36fd3ca76928e9d7483cf59eb19dc57333088fca2787c00a8bc4f7] <==
	I1221 20:25:19.941957       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:25:19.942259       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1221 20:25:19.942443       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:25:19.942468       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:25:19.942497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:25:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:25:20.213308       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:25:20.213338       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:25:20.213351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:25:20.215483       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:25:20.539417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:25:20.539447       1 metrics.go:72] Registering metrics
	I1221 20:25:20.539517       1 controller.go:711] "Syncing nftables rules"
	I1221 20:25:30.219340       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:25:30.219403       1 main.go:301] handling current node
	I1221 20:25:40.216803       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:25:40.216831       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ffa8dd884fbdcf36339e78320fbbf0414f81caf2859c8cf4e2f2f8f744b7d36] <==
	E1221 20:25:01.350844       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:01.364505       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:01.375163       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","catch-all","exempt","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:01.391847       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:01.409274       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:01.413297       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1221 20:25:02.210808       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 20:25:02.214623       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:25:02.214644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:25:02.669943       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:25:02.711699       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:25:02.816204       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:25:02.821404       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1221 20:25:02.822526       1 controller.go:624] quota admission added evaluator for: endpoints
	I1221 20:25:02.826572       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:25:03.262887       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1221 20:25:04.381554       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1221 20:25:04.392766       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:25:04.402564       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	E1221 20:25:11.306012       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1221 20:25:16.834082       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1221 20:25:17.233023       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1221 20:25:21.306405       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:31.306874       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:25:41.307669       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [a4bd4585441c935637a493687312381998f6bd9d4d4df30a110922210b3bb6cd] <==
	I1221 20:25:16.430428       1 shared_informer.go:318] Caches are synced for daemon sets
	I1221 20:25:16.432721       1 shared_informer.go:318] Caches are synced for resource quota
	I1221 20:25:16.487819       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1221 20:25:16.838280       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1221 20:25:16.857400       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:25:16.872752       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:25:16.872782       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1221 20:25:17.241766       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g5mb8"
	I1221 20:25:17.243926       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hsngj"
	I1221 20:25:17.341657       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v285b"
	I1221 20:25:17.348798       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gvtzf"
	I1221 20:25:17.358468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="520.497697ms"
	I1221 20:25:17.365548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.899735ms"
	I1221 20:25:17.365675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.803µs"
	I1221 20:25:17.968247       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1221 20:25:17.981981       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-gvtzf"
	I1221 20:25:17.990704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.808073ms"
	I1221 20:25:17.997959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.200192ms"
	I1221 20:25:17.998680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.261µs"
	I1221 20:25:30.529112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.386µs"
	I1221 20:25:30.552174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.741µs"
	I1221 20:25:31.330312       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1221 20:25:31.562054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="164.657µs"
	I1221 20:25:31.594502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.02437ms"
	I1221 20:25:31.597505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.199µs"
	
	
	==> kube-proxy [7b15763a89c9debec71c03d64c746ca776399eb685753e794cad2890cd19ea12] <==
	I1221 20:25:17.793059       1 server_others.go:69] "Using iptables proxy"
	I1221 20:25:17.806168       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1221 20:25:17.839861       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:25:17.846667       1 server_others.go:152] "Using iptables Proxier"
	I1221 20:25:17.846730       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 20:25:17.846745       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 20:25:17.846791       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 20:25:17.847104       1 server.go:846] "Version info" version="v1.28.0"
	I1221 20:25:17.847174       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:25:17.848388       1 config.go:315] "Starting node config controller"
	I1221 20:25:17.848457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 20:25:17.848581       1 config.go:188] "Starting service config controller"
	I1221 20:25:17.848606       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 20:25:17.848630       1 config.go:97] "Starting endpoint slice config controller"
	I1221 20:25:17.848642       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 20:25:17.949398       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1221 20:25:17.949473       1 shared_informer.go:318] Caches are synced for service config
	I1221 20:25:17.949773       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [bf47e8db77ca748bec1a1ced8baeddda9336eb215e746bb61932aae685f222d7] <==
	W1221 20:25:01.291996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1221 20:25:01.292446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1221 20:25:01.292009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 20:25:01.292463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1221 20:25:01.292024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1221 20:25:01.292513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1221 20:25:01.292324       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1221 20:25:01.292531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1221 20:25:01.292333       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1221 20:25:01.292553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1221 20:25:01.292878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 20:25:01.292904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1221 20:25:02.205772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1221 20:25:02.205812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1221 20:25:02.246602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 20:25:02.246647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1221 20:25:02.299851       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 20:25:02.299894       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:25:02.320665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1221 20:25:02.320705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1221 20:25:02.325018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 20:25:02.325047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1221 20:25:02.446732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 20:25:02.446846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1221 20:25:05.386731       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 20:25:16 old-k8s-version-699289 kubelet[1396]: I1221 20:25:16.452185    1396 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.248586    1396 topology_manager.go:215] "Topology Admit Handler" podUID="f792c035-76d3-4389-8954-def1a475b16d" podNamespace="kube-system" podName="kindnet-g5mb8"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.251655    1396 topology_manager.go:215] "Topology Admit Handler" podUID="c431b721-0655-453a-b589-066502c37abc" podNamespace="kube-system" podName="kube-proxy-hsngj"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348666    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f792c035-76d3-4389-8954-def1a475b16d-xtables-lock\") pod \"kindnet-g5mb8\" (UID: \"f792c035-76d3-4389-8954-def1a475b16d\") " pod="kube-system/kindnet-g5mb8"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348753    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c431b721-0655-453a-b589-066502c37abc-kube-proxy\") pod \"kube-proxy-hsngj\" (UID: \"c431b721-0655-453a-b589-066502c37abc\") " pod="kube-system/kube-proxy-hsngj"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348793    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdsr\" (UniqueName: \"kubernetes.io/projected/c431b721-0655-453a-b589-066502c37abc-kube-api-access-8kdsr\") pod \"kube-proxy-hsngj\" (UID: \"c431b721-0655-453a-b589-066502c37abc\") " pod="kube-system/kube-proxy-hsngj"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348850    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f792c035-76d3-4389-8954-def1a475b16d-lib-modules\") pod \"kindnet-g5mb8\" (UID: \"f792c035-76d3-4389-8954-def1a475b16d\") " pod="kube-system/kindnet-g5mb8"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348882    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chj97\" (UniqueName: \"kubernetes.io/projected/f792c035-76d3-4389-8954-def1a475b16d-kube-api-access-chj97\") pod \"kindnet-g5mb8\" (UID: \"f792c035-76d3-4389-8954-def1a475b16d\") " pod="kube-system/kindnet-g5mb8"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.348933    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c431b721-0655-453a-b589-066502c37abc-xtables-lock\") pod \"kube-proxy-hsngj\" (UID: \"c431b721-0655-453a-b589-066502c37abc\") " pod="kube-system/kube-proxy-hsngj"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.349010    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f792c035-76d3-4389-8954-def1a475b16d-cni-cfg\") pod \"kindnet-g5mb8\" (UID: \"f792c035-76d3-4389-8954-def1a475b16d\") " pod="kube-system/kindnet-g5mb8"
	Dec 21 20:25:17 old-k8s-version-699289 kubelet[1396]: I1221 20:25:17.349054    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c431b721-0655-453a-b589-066502c37abc-lib-modules\") pod \"kube-proxy-hsngj\" (UID: \"c431b721-0655-453a-b589-066502c37abc\") " pod="kube-system/kube-proxy-hsngj"
	Dec 21 20:25:20 old-k8s-version-699289 kubelet[1396]: I1221 20:25:20.558973    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-g5mb8" podStartSLOduration=1.421999113 podCreationTimestamp="2025-12-21 20:25:17 +0000 UTC" firstStartedPulling="2025-12-21 20:25:17.563285575 +0000 UTC m=+13.210599547" lastFinishedPulling="2025-12-21 20:25:19.700198891 +0000 UTC m=+15.347512863" observedRunningTime="2025-12-21 20:25:20.558734215 +0000 UTC m=+16.206048194" watchObservedRunningTime="2025-12-21 20:25:20.558912429 +0000 UTC m=+16.206226407"
	Dec 21 20:25:20 old-k8s-version-699289 kubelet[1396]: I1221 20:25:20.559108    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hsngj" podStartSLOduration=3.559081211 podCreationTimestamp="2025-12-21 20:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:18.538688115 +0000 UTC m=+14.186002119" watchObservedRunningTime="2025-12-21 20:25:20.559081211 +0000 UTC m=+16.206395190"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.492757    1396 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.525387    1396 topology_manager.go:215] "Topology Admit Handler" podUID="f5aafc9c-4f84-4134-b0a5-878e925fefbd" podNamespace="kube-system" podName="storage-provisioner"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.528793    1396 topology_manager.go:215] "Topology Admit Handler" podUID="bd0c7c2b-2c82-4060-858f-e812ffc45b5e" podNamespace="kube-system" podName="coredns-5dd5756b68-v285b"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.639470    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwkjn\" (UniqueName: \"kubernetes.io/projected/bd0c7c2b-2c82-4060-858f-e812ffc45b5e-kube-api-access-hwkjn\") pod \"coredns-5dd5756b68-v285b\" (UID: \"bd0c7c2b-2c82-4060-858f-e812ffc45b5e\") " pod="kube-system/coredns-5dd5756b68-v285b"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.639539    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4t6\" (UniqueName: \"kubernetes.io/projected/f5aafc9c-4f84-4134-b0a5-878e925fefbd-kube-api-access-sj4t6\") pod \"storage-provisioner\" (UID: \"f5aafc9c-4f84-4134-b0a5-878e925fefbd\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.639572    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f5aafc9c-4f84-4134-b0a5-878e925fefbd-tmp\") pod \"storage-provisioner\" (UID: \"f5aafc9c-4f84-4134-b0a5-878e925fefbd\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:30 old-k8s-version-699289 kubelet[1396]: I1221 20:25:30.639671    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0c7c2b-2c82-4060-858f-e812ffc45b5e-config-volume\") pod \"coredns-5dd5756b68-v285b\" (UID: \"bd0c7c2b-2c82-4060-858f-e812ffc45b5e\") " pod="kube-system/coredns-5dd5756b68-v285b"
	Dec 21 20:25:31 old-k8s-version-699289 kubelet[1396]: I1221 20:25:31.562087    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-v285b" podStartSLOduration=14.562037559 podCreationTimestamp="2025-12-21 20:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:31.561818887 +0000 UTC m=+27.209132866" watchObservedRunningTime="2025-12-21 20:25:31.562037559 +0000 UTC m=+27.209351534"
	Dec 21 20:25:31 old-k8s-version-699289 kubelet[1396]: I1221 20:25:31.585141    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.585085047 podCreationTimestamp="2025-12-21 20:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:31.571984654 +0000 UTC m=+27.219298632" watchObservedRunningTime="2025-12-21 20:25:31.585085047 +0000 UTC m=+27.232399026"
	Dec 21 20:25:33 old-k8s-version-699289 kubelet[1396]: I1221 20:25:33.668971    1396 topology_manager.go:215] "Topology Admit Handler" podUID="8c49f147-ca7a-4fd1-8d64-3e54460c48f2" podNamespace="default" podName="busybox"
	Dec 21 20:25:33 old-k8s-version-699289 kubelet[1396]: I1221 20:25:33.757965    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwnq\" (UniqueName: \"kubernetes.io/projected/8c49f147-ca7a-4fd1-8d64-3e54460c48f2-kube-api-access-jqwnq\") pod \"busybox\" (UID: \"8c49f147-ca7a-4fd1-8d64-3e54460c48f2\") " pod="default/busybox"
	Dec 21 20:25:35 old-k8s-version-699289 kubelet[1396]: I1221 20:25:35.570420    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.962681503 podCreationTimestamp="2025-12-21 20:25:33 +0000 UTC" firstStartedPulling="2025-12-21 20:25:33.99059889 +0000 UTC m=+29.637912859" lastFinishedPulling="2025-12-21 20:25:34.598280667 +0000 UTC m=+30.245594637" observedRunningTime="2025-12-21 20:25:35.570300463 +0000 UTC m=+31.217614459" watchObservedRunningTime="2025-12-21 20:25:35.570363281 +0000 UTC m=+31.217677267"
	
	
	==> storage-provisioner [d6556483cd3ef3b4b1de40d43bc89fd5df7907267106428bf0a66d2e22976815] <==
	I1221 20:25:30.908757       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:25:30.920555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:25:30.920603       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 20:25:30.930844       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:25:30.931006       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_8e13748f-ae79-4f28-a9ea-7580787419f9!
	I1221 20:25:30.931292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b7387ac-0eac-492c-9220-7a6071dd4756", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-699289_8e13748f-ae79-4f28-a9ea-7580787419f9 became leader
	I1221 20:25:31.031621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_8e13748f-ae79-4f28-a9ea-7580787419f9!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-699289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.14244ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:25:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-328404 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-328404 describe deploy/metrics-server -n kube-system: exit status 1 (64.939892ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-328404 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328404
helpers_test.go:244: (dbg) docker inspect no-preload-328404:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	        "Created": "2025-12-21T20:24:59.700822041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321581,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:24:59.738472668Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hosts",
	        "LogPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c-json.log",
	        "Name": "/no-preload-328404",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328404:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-328404",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	                "LowerDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328404",
	                "Source": "/var/lib/docker/volumes/no-preload-328404/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328404",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328404",
	                "name.minikube.sigs.k8s.io": "no-preload-328404",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d79af2d62d2f611c8a0b3d102ebf272355063685f8d7b2585d8e2a1a22e32625",
	            "SandboxKey": "/var/run/docker/netns/d79af2d62d2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-328404": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3825326ac2cef213f4d7f258fd319688605c412ad1609130b5a218375fcefc22",
	                    "EndpointID": "e7c2e3180aca9b3d204b4cc02667ad39e79a247c76256cfded46f229af8b4f32",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3a:21:aa:24:a6:cd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328404",
	                        "15210117610b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
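For targeted post-mortems, individual fields from the inspect output above can also be pulled with a Go template instead of dumping the whole document; for example (a sketch, assuming the container still exists):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-328404

which, per the Networks section above, would print 192.168.85.2.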
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328404 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-328404 logs -n 25: (2.661802544s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-149976 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p old-k8s-version-699289 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo docker system info                                                                                                                                 │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cri-dockerd --version                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo containerd config dump                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                        │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                          │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:25:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:25:51.595979  339032 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:25:51.596649  339032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:51.596671  339032 out.go:374] Setting ErrFile to fd 2...
	I1221 20:25:51.596680  339032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:51.597481  339032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:25:51.598035  339032 out.go:368] Setting JSON to false
	I1221 20:25:51.599195  339032 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4101,"bootTime":1766344651,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:25:51.599290  339032 start.go:143] virtualization: kvm guest
	I1221 20:25:51.601321  339032 out.go:179] * [default-k8s-diff-port-766361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:25:51.602496  339032 notify.go:221] Checking for updates...
	I1221 20:25:51.602534  339032 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:25:51.603705  339032 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:25:51.604991  339032 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:25:51.606266  339032 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:25:51.607634  339032 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:25:51.608755  339032 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:25:51.610490  339032 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:25:51.610643  339032 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:25:51.610773  339032 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:25:51.610888  339032 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:25:51.636595  339032 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:25:51.636746  339032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:51.691439  339032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-21 20:25:51.681935085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:51.691556  339032 docker.go:319] overlay module found
	I1221 20:25:51.694052  339032 out.go:179] * Using the docker driver based on user configuration
	W1221 20:25:47.864375  328795 node_ready.go:57] node "embed-certs-413073" has "Ready":"False" status (will retry)
	W1221 20:25:50.363495  328795 node_ready.go:57] node "embed-certs-413073" has "Ready":"False" status (will retry)
	I1221 20:25:51.695117  339032 start.go:309] selected driver: docker
	I1221 20:25:51.695129  339032 start.go:928] validating driver "docker" against <nil>
	I1221 20:25:51.695139  339032 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:25:51.695690  339032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:51.749134  339032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-21 20:25:51.74000806 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:51.749334  339032 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:25:51.749540  339032 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:25:51.751045  339032 out.go:179] * Using Docker driver with root privileges
	I1221 20:25:51.752065  339032 cni.go:84] Creating CNI manager for ""
	I1221 20:25:51.752118  339032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:51.752127  339032 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:25:51.752180  339032 start.go:353] cluster config:
	{Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:25:51.753254  339032 out.go:179] * Starting "default-k8s-diff-port-766361" primary control-plane node in "default-k8s-diff-port-766361" cluster
	I1221 20:25:51.754323  339032 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:25:51.755555  339032 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:25:51.756579  339032 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:25:51.756627  339032 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:25:51.756643  339032 cache.go:65] Caching tarball of preloaded images
	I1221 20:25:51.756665  339032 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:25:51.756770  339032 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:25:51.756793  339032 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:25:51.756885  339032 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json ...
	I1221 20:25:51.756911  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json: {Name:mk2cfb74c386d89b261631c17daa2d5e09c0157d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:25:51.776121  339032 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:25:51.776137  339032 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:25:51.776152  339032 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:25:51.776183  339032 start.go:360] acquireMachinesLock for default-k8s-diff-port-766361: {Name:mk4ee86ea8997556ea832d3122ad44701b03fc29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:25:51.776312  339032 start.go:364] duration metric: took 100.188µs to acquireMachinesLock for "default-k8s-diff-port-766361"
	I1221 20:25:51.776334  339032 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:25:51.776432  339032 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 21 20:25:42 no-preload-328404 crio[768]: time="2025-12-21T20:25:42.6063854Z" level=info msg="Starting container: 7c0bdfa1bcabe370a6ae8288e6ecb67f916e3f8eb78be15908f31721f99521b5" id=7609d26f-8f2b-4aed-8917-6542bb5d526e name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:25:42 no-preload-328404 crio[768]: time="2025-12-21T20:25:42.609639502Z" level=info msg="Started container" PID=2799 containerID=7c0bdfa1bcabe370a6ae8288e6ecb67f916e3f8eb78be15908f31721f99521b5 description=kube-system/coredns-7d764666f9-wkztz/coredns id=7609d26f-8f2b-4aed-8917-6542bb5d526e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0908890670335e7225466db1fe6a6deffe58a0459d843e5ff53c11cfe0d8c74
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.250773557Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b1b71759-d916-42fe-8802-f7789e89db9b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.25089379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.256629527Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7889c3948749212be1c4cd6f9f8563534aa172d3b0d802373cb2c652e7aab436 UID:abf67b09-143c-43b8-862d-b90cd54af971 NetNS:/var/run/netns/26761300-3a93-4578-98e5-7ceff6df4a85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000903028}] Aliases:map[]}"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.256667014Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.267658088Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7889c3948749212be1c4cd6f9f8563534aa172d3b0d802373cb2c652e7aab436 UID:abf67b09-143c-43b8-862d-b90cd54af971 NetNS:/var/run/netns/26761300-3a93-4578-98e5-7ceff6df4a85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000903028}] Aliases:map[]}"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.267861787Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.26887374Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.270076675Z" level=info msg="Ran pod sandbox 7889c3948749212be1c4cd6f9f8563534aa172d3b0d802373cb2c652e7aab436 with infra container: default/busybox/POD" id=b1b71759-d916-42fe-8802-f7789e89db9b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.271449589Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a78cc13c-7e6c-4456-a189-e0fa338dce40 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.271587469Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a78cc13c-7e6c-4456-a189-e0fa338dce40 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.271634974Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a78cc13c-7e6c-4456-a189-e0fa338dce40 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.272472828Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a567febd-0069-4802-a3b8-c7cf38de1179 name=/runtime.v1.ImageService/PullImage
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.274103435Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.91540721Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a567febd-0069-4802-a3b8-c7cf38de1179 name=/runtime.v1.ImageService/PullImage
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.916086931Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04ca6160-5480-401b-9252-8c151b18dc54 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.918585268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67bcac53-69e0-4921-a967-0ec904e43536 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.923136578Z" level=info msg="Creating container: default/busybox/busybox" id=5deade0d-64e8-46ac-ae8a-32d6e342edac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.923293465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.928614715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.928990162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.966794375Z" level=info msg="Created container 6ab7f656f6254b615358b69239ab1be0625b0cc80fb9420a856ec3ef02dbee86: default/busybox/busybox" id=5deade0d-64e8-46ac-ae8a-32d6e342edac name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.967527651Z" level=info msg="Starting container: 6ab7f656f6254b615358b69239ab1be0625b0cc80fb9420a856ec3ef02dbee86" id=1d6a2054-c515-4e74-a25d-a0a02e39801a name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:25:45 no-preload-328404 crio[768]: time="2025-12-21T20:25:45.969884331Z" level=info msg="Started container" PID=2870 containerID=6ab7f656f6254b615358b69239ab1be0625b0cc80fb9420a856ec3ef02dbee86 description=default/busybox/busybox id=1d6a2054-c515-4e74-a25d-a0a02e39801a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7889c3948749212be1c4cd6f9f8563534aa172d3b0d802373cb2c652e7aab436
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6ab7f656f6254       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   7889c39487492       busybox                                     default
	7c0bdfa1bcabe       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   b090889067033       coredns-7d764666f9-wkztz                    kube-system
	293d4aa9bc1bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   7854bfeb43960       storage-provisioner                         kube-system
	6811951a611d5       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   60977abd2d3a4       kindnet-txb2h                               kube-system
	6137b5e69d8e7       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      24 seconds ago      Running             kube-proxy                0                   8caf069a7925f       kube-proxy-tnpxj                            kube-system
	359c80b1d5874       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      35 seconds ago      Running             kube-scheduler            0                   11ad3daa25ee2       kube-scheduler-no-preload-328404            kube-system
	c1f14162263e9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   0fdc043b369ae       etcd-no-preload-328404                      kube-system
	368153717b940       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      35 seconds ago      Running             kube-controller-manager   0                   a1c2405102d26       kube-controller-manager-no-preload-328404   kube-system
	b162c17a4cf9a       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      35 seconds ago      Running             kube-apiserver            0                   3c58b3c8c1abd       kube-apiserver-no-preload-328404            kube-system
	
	
	==> coredns [7c0bdfa1bcabe370a6ae8288e6ecb67f916e3f8eb78be15908f31721f99521b5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44455 - 19522 "HINFO IN 9127782026961824193.5497104470267251861. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.482137637s
	
	
	==> describe nodes <==
	Name:               no-preload-328404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-328404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=no-preload-328404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-328404
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:25:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:25:42 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:25:42 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:25:42 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:25:42 +0000   Sun, 21 Dec 2025 20:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-328404
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                1bc220dc-568c-47a3-81e8-8d8a8f6c7b02
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-wkztz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-328404                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-txb2h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-328404             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-328404    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-tnpxj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-328404             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-328404 event: Registered Node no-preload-328404 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [c1f14162263e9d6e205c7133dd94ff22b556bf59b9d40c5bab985d5310fb94a8] <==
	{"level":"info","ts":"2025-12-21T20:25:21.231646Z","caller":"traceutil/trace.go:172","msg":"trace[1093534419] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"265.761349ms","start":"2025-12-21T20:25:20.965871Z","end":"2025-12-21T20:25:21.231632Z","steps":["trace[1093534419] 'process raft request'  (duration: 265.696636ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:21.359827Z","caller":"traceutil/trace.go:172","msg":"trace[2066401904] linearizableReadLoop","detail":"{readStateIndex:51; appliedIndex:51; }","duration":"104.518217ms","start":"2025-12-21T20:25:21.255272Z","end":"2025-12-21T20:25:21.359790Z","steps":["trace[2066401904] 'read index received'  (duration: 104.510744ms)","trace[2066401904] 'applied index is now lower than readState.Index'  (duration: 6.13µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:21.415617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.325694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-21T20:25:21.415670Z","caller":"traceutil/trace.go:172","msg":"trace[200462094] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:47; }","duration":"160.398063ms","start":"2025-12-21T20:25:21.255259Z","end":"2025-12-21T20:25:21.415657Z","steps":["trace[200462094] 'agreement among raft nodes before linearized reading'  (duration: 104.650658ms)","trace[200462094] 'range keys from in-memory index tree'  (duration: 55.640335ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:25:21.415730Z","caller":"traceutil/trace.go:172","msg":"trace[64161728] transaction","detail":"{read_only:false; response_revision:48; number_of_response:1; }","duration":"181.917506ms","start":"2025-12-21T20:25:21.233794Z","end":"2025-12-21T20:25:21.415711Z","steps":["trace[64161728] 'process raft request'  (duration: 126.152382ms)","trace[64161728] 'compare'  (duration: 55.621753ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:25:21.415769Z","caller":"traceutil/trace.go:172","msg":"trace[1888655020] transaction","detail":"{read_only:false; response_revision:49; number_of_response:1; }","duration":"180.789812ms","start":"2025-12-21T20:25:21.234968Z","end":"2025-12-21T20:25:21.415758Z","steps":["trace[1888655020] 'process raft request'  (duration: 180.737746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:25:21.415846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.471806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-21T20:25:21.415878Z","caller":"traceutil/trace.go:172","msg":"trace[2075001178] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:0; response_revision:49; }","duration":"160.513016ms","start":"2025-12-21T20:25:21.255357Z","end":"2025-12-21T20:25:21.415870Z","steps":["trace[2075001178] 'agreement among raft nodes before linearized reading'  (duration: 160.445117ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:21.564110Z","caller":"traceutil/trace.go:172","msg":"trace[1219081042] linearizableReadLoop","detail":"{readStateIndex:56; appliedIndex:56; }","duration":"125.098423ms","start":"2025-12-21T20:25:21.438988Z","end":"2025-12-21T20:25:21.564086Z","steps":["trace[1219081042] 'read index received'  (duration: 125.091365ms)","trace[1219081042] 'applied index is now lower than readState.Index'  (duration: 5.643µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:21.826418Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"387.403504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-21T20:25:21.826483Z","caller":"traceutil/trace.go:172","msg":"trace[60984141] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:0; response_revision:52; }","duration":"387.482555ms","start":"2025-12-21T20:25:21.438985Z","end":"2025-12-21T20:25:21.826468Z","steps":["trace[60984141] 'agreement among raft nodes before linearized reading'  (duration: 125.190179ms)","trace[60984141] 'range keys from in-memory index tree'  (duration: 262.180808ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:21.826513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.22415ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597880858029285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/kube-scheduler\" value_size:617 >> failure:<>>","response":"size:14"}
	{"level":"warn","ts":"2025-12-21T20:25:21.826515Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:25:21.438976Z","time spent":"387.531777ms","remote":"127.0.0.1:57196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 "}
	{"level":"info","ts":"2025-12-21T20:25:21.826668Z","caller":"traceutil/trace.go:172","msg":"trace[1236714367] transaction","detail":"{read_only:false; response_revision:54; number_of_response:1; }","duration":"387.599068ms","start":"2025-12-21T20:25:21.439057Z","end":"2025-12-21T20:25:21.826656Z","steps":["trace[1236714367] 'process raft request'  (duration: 387.540054ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:21.826682Z","caller":"traceutil/trace.go:172","msg":"trace[1067945978] transaction","detail":"{read_only:false; response_revision:53; number_of_response:1; }","duration":"388.117308ms","start":"2025-12-21T20:25:21.438545Z","end":"2025-12-21T20:25:21.826662Z","steps":["trace[1067945978] 'process raft request'  (duration: 125.697006ms)","trace[1067945978] 'compare'  (duration: 262.114338ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:25:21.826705Z","caller":"traceutil/trace.go:172","msg":"trace[1547641233] linearizableReadLoop","detail":"{readStateIndex:57; appliedIndex:56; }","duration":"262.527616ms","start":"2025-12-21T20:25:21.564166Z","end":"2025-12-21T20:25:21.826694Z","steps":["trace[1547641233] 'read index received'  (duration: 148.172228ms)","trace[1547641233] 'applied index is now lower than readState.Index'  (duration: 114.353986ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-21T20:25:21.826719Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:25:21.439045Z","time spent":"387.648457ms","remote":"127.0.0.1:57236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":464,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>"}
	{"level":"warn","ts":"2025-12-21T20:25:21.826753Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:25:21.438533Z","time spent":"388.188123ms","remote":"127.0.0.1:57346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":661,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/kube-scheduler\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/kube-scheduler\" value_size:617 >> failure:<>"}
	{"level":"warn","ts":"2025-12-21T20:25:21.826820Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"386.89274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/probes\" limit:1 ","response":"range_response_count:1 size:1056"}
	{"level":"info","ts":"2025-12-21T20:25:21.826847Z","caller":"traceutil/trace.go:172","msg":"trace[1828995889] range","detail":"{range_begin:/registry/flowschemas/probes; range_end:; response_count:1; response_revision:54; }","duration":"386.921733ms","start":"2025-12-21T20:25:21.439918Z","end":"2025-12-21T20:25:21.826840Z","steps":["trace[1828995889] 'agreement among raft nodes before linearized reading'  (duration: 386.808393ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-21T20:25:21.826865Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-21T20:25:21.439909Z","time spent":"386.951704ms","remote":"127.0.0.1:57346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":1079,"request content":"key:\"/registry/flowschemas/probes\" limit:1 "}
	{"level":"warn","ts":"2025-12-21T20:25:21.826889Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"299.352752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-21T20:25:21.826922Z","caller":"traceutil/trace.go:172","msg":"trace[989158700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:54; }","duration":"299.386377ms","start":"2025-12-21T20:25:21.527528Z","end":"2025-12-21T20:25:21.826914Z","steps":["trace[989158700] 'agreement among raft nodes before linearized reading'  (duration: 299.333263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:22.168067Z","caller":"traceutil/trace.go:172","msg":"trace[1001471000] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"158.695899ms","start":"2025-12-21T20:25:22.009355Z","end":"2025-12-21T20:25:22.168051Z","steps":["trace[1001471000] 'process raft request'  (duration: 158.663243ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:25:22.168139Z","caller":"traceutil/trace.go:172","msg":"trace[1351837987] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"158.824415ms","start":"2025-12-21T20:25:22.009295Z","end":"2025-12-21T20:25:22.168119Z","steps":["trace[1351837987] 'process raft request'  (duration: 124.203605ms)","trace[1351837987] 'compare'  (duration: 34.427615ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:25:54 up  1:08,  0 user,  load average: 3.67, 3.72, 2.63
	Linux no-preload-328404 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6811951a611d5c6de1948c779b56a106ae5c4a6f24133efb275b2beaae9584cf] <==
	I1221 20:25:31.711320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:25:31.711616       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1221 20:25:31.711780       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:25:31.711806       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:25:31.711831       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:25:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:25:31.914639       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:25:31.914715       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:25:31.914727       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:25:32.006884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:25:32.206519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:25:32.206549       1 metrics.go:72] Registering metrics
	I1221 20:25:32.206676       1 controller.go:711] "Syncing nftables rules"
	I1221 20:25:41.915731       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:25:41.915851       1 main.go:301] handling current node
	I1221 20:25:51.918329       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:25:51.918357       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b162c17a4cf9a70284a97fed0d1eaba1ea24f0a5a1f3955f45fb669eca235bfb] <==
	E1221 20:25:20.428634       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	E1221 20:25:20.560461       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1221 20:25:20.561923       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:20.562138       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1221 20:25:20.614288       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:20.615051       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1221 20:25:20.821936       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:25:21.436543       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1221 20:25:21.827849       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:25:21.827873       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:25:22.740969       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:25:22.790432       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:25:22.936079       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:25:22.943758       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1221 20:25:22.945193       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:25:22.950388       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:25:23.297847       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:25:24.064067       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:25:24.077928       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:25:24.101395       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:25:28.951646       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:28.956819       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:29.201027       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:25:29.299692       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1221 20:25:53.045297       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45508: use of closed network connection
	
	
	==> kube-controller-manager [368153717b94036e98355f820b29a1ed988080ae3e0b213ac5a9f6a414df7dcd] <==
	I1221 20:25:28.102656       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.102796       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1221 20:25:28.101613       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.102893       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-328404"
	I1221 20:25:28.102905       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.102962       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1221 20:25:28.101650       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.103041       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.103112       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.103173       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.103181       1 range_allocator.go:177] "Sending events to api server"
	I1221 20:25:28.103504       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1221 20:25:28.103562       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:25:28.103590       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.103692       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.102802       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.101670       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.106087       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:25:28.114044       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.116592       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-328404" podCIDRs=["10.244.0.0/24"]
	I1221 20:25:28.202037       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:28.202057       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:25:28.202064       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:25:28.206289       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:43.104481       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [6137b5e69d8e7ce42cb64e0e29ec65030a882792e46d79860824a8ada1875e7b] <==
	I1221 20:25:29.739528       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:25:29.814131       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:25:29.914699       1 shared_informer.go:377] "Caches are synced"
	I1221 20:25:29.914736       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1221 20:25:29.914820       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:25:29.933718       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:25:29.933784       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:25:29.938993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:25:29.939398       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:25:29.939424       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:25:29.942099       1 config.go:200] "Starting service config controller"
	I1221 20:25:29.942121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:25:29.942477       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:25:29.942498       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:25:29.942502       1 config.go:309] "Starting node config controller"
	I1221 20:25:29.942513       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:25:29.942520       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:25:29.942514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:25:29.942540       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:25:30.042668       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:25:30.042754       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:25:30.042789       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [359c80b1d5874523e7bfbb060f6ef9574fcd3bff761205822fd32e4c42e504d2] <==
	E1221 20:25:20.317087       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1221 20:25:20.317151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1221 20:25:20.317276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1221 20:25:20.317337       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1221 20:25:20.317673       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1221 20:25:21.134384       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1221 20:25:21.224071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1221 20:25:21.353774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1221 20:25:21.369746       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1221 20:25:21.375548       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1221 20:25:21.389800       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1221 20:25:21.390380       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1221 20:25:21.442007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1221 20:25:21.479305       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1221 20:25:21.487460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1221 20:25:21.568106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1221 20:25:21.568106       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1221 20:25:21.581273       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1221 20:25:21.672932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1221 20:25:21.756713       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1221 20:25:21.817148       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1221 20:25:21.818820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1221 20:25:21.896184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1221 20:25:21.903873       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1221 20:25:24.108723       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: I1221 20:25:29.369562    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff8c4aab-19f6-4e7d-9f4f-e3e499a57017-lib-modules\") pod \"kindnet-txb2h\" (UID: \"ff8c4aab-19f6-4e7d-9f4f-e3e499a57017\") " pod="kube-system/kindnet-txb2h"
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: I1221 20:25:29.369639    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81c501a3-fe67-425e-b459-5d9e8783d67e-kube-proxy\") pod \"kube-proxy-tnpxj\" (UID: \"81c501a3-fe67-425e-b459-5d9e8783d67e\") " pod="kube-system/kube-proxy-tnpxj"
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: I1221 20:25:29.369697    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81c501a3-fe67-425e-b459-5d9e8783d67e-xtables-lock\") pod \"kube-proxy-tnpxj\" (UID: \"81c501a3-fe67-425e-b459-5d9e8783d67e\") " pod="kube-system/kube-proxy-tnpxj"
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: I1221 20:25:29.369756    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ff8c4aab-19f6-4e7d-9f4f-e3e499a57017-cni-cfg\") pod \"kindnet-txb2h\" (UID: \"ff8c4aab-19f6-4e7d-9f4f-e3e499a57017\") " pod="kube-system/kindnet-txb2h"
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: I1221 20:25:29.369838    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh66n\" (UniqueName: \"kubernetes.io/projected/ff8c4aab-19f6-4e7d-9f4f-e3e499a57017-kube-api-access-sh66n\") pod \"kindnet-txb2h\" (UID: \"ff8c4aab-19f6-4e7d-9f4f-e3e499a57017\") " pod="kube-system/kindnet-txb2h"
	Dec 21 20:25:29 no-preload-328404 kubelet[2201]: E1221 20:25:29.607740    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-328404" containerName="kube-scheduler"
	Dec 21 20:25:30 no-preload-328404 kubelet[2201]: E1221 20:25:30.086342    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-328404" containerName="etcd"
	Dec 21 20:25:30 no-preload-328404 kubelet[2201]: I1221 20:25:30.100657    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tnpxj" podStartSLOduration=1.100638624 podStartE2EDuration="1.100638624s" podCreationTimestamp="2025-12-21 20:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:30.019929569 +0000 UTC m=+6.205046765" watchObservedRunningTime="2025-12-21 20:25:30.100638624 +0000 UTC m=+6.285755799"
	Dec 21 20:25:30 no-preload-328404 kubelet[2201]: E1221 20:25:30.166623    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-328404" containerName="kube-apiserver"
	Dec 21 20:25:32 no-preload-328404 kubelet[2201]: I1221 20:25:32.025416    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-txb2h" podStartSLOduration=1.177332666 podStartE2EDuration="3.025396663s" podCreationTimestamp="2025-12-21 20:25:29 +0000 UTC" firstStartedPulling="2025-12-21 20:25:29.643650399 +0000 UTC m=+5.828767565" lastFinishedPulling="2025-12-21 20:25:31.491714391 +0000 UTC m=+7.676831562" observedRunningTime="2025-12-21 20:25:32.025320377 +0000 UTC m=+8.210437551" watchObservedRunningTime="2025-12-21 20:25:32.025396663 +0000 UTC m=+8.210513836"
	Dec 21 20:25:39 no-preload-328404 kubelet[2201]: E1221 20:25:39.243105    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-328404" containerName="kube-controller-manager"
	Dec 21 20:25:39 no-preload-328404 kubelet[2201]: E1221 20:25:39.612459    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-328404" containerName="kube-scheduler"
	Dec 21 20:25:40 no-preload-328404 kubelet[2201]: E1221 20:25:40.088165    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-328404" containerName="etcd"
	Dec 21 20:25:40 no-preload-328404 kubelet[2201]: E1221 20:25:40.173045    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-328404" containerName="kube-apiserver"
	Dec 21 20:25:42 no-preload-328404 kubelet[2201]: I1221 20:25:42.206213    2201 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 21 20:25:42 no-preload-328404 kubelet[2201]: I1221 20:25:42.257960    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c790011a-9ad3-4344-b9ec-e5f3cfba2f21-config-volume\") pod \"coredns-7d764666f9-wkztz\" (UID: \"c790011a-9ad3-4344-b9ec-e5f3cfba2f21\") " pod="kube-system/coredns-7d764666f9-wkztz"
	Dec 21 20:25:42 no-preload-328404 kubelet[2201]: I1221 20:25:42.258210    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdw2v\" (UniqueName: \"kubernetes.io/projected/c790011a-9ad3-4344-b9ec-e5f3cfba2f21-kube-api-access-mdw2v\") pod \"coredns-7d764666f9-wkztz\" (UID: \"c790011a-9ad3-4344-b9ec-e5f3cfba2f21\") " pod="kube-system/coredns-7d764666f9-wkztz"
	Dec 21 20:25:42 no-preload-328404 kubelet[2201]: I1221 20:25:42.258325    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e9e0ecd-7bb1-456d-97d6-436ccd273c6a-tmp\") pod \"storage-provisioner\" (UID: \"3e9e0ecd-7bb1-456d-97d6-436ccd273c6a\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:42 no-preload-328404 kubelet[2201]: I1221 20:25:42.258406    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtx8\" (UniqueName: \"kubernetes.io/projected/3e9e0ecd-7bb1-456d-97d6-436ccd273c6a-kube-api-access-gvtx8\") pod \"storage-provisioner\" (UID: \"3e9e0ecd-7bb1-456d-97d6-436ccd273c6a\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:43 no-preload-328404 kubelet[2201]: E1221 20:25:43.041595    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wkztz" containerName="coredns"
	Dec 21 20:25:43 no-preload-328404 kubelet[2201]: I1221 20:25:43.052407    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.052388101 podStartE2EDuration="14.052388101s" podCreationTimestamp="2025-12-21 20:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:43.052036157 +0000 UTC m=+19.237153331" watchObservedRunningTime="2025-12-21 20:25:43.052388101 +0000 UTC m=+19.237505275"
	Dec 21 20:25:43 no-preload-328404 kubelet[2201]: I1221 20:25:43.068013    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wkztz" podStartSLOduration=14.067994764 podStartE2EDuration="14.067994764s" podCreationTimestamp="2025-12-21 20:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:43.067496571 +0000 UTC m=+19.252613745" watchObservedRunningTime="2025-12-21 20:25:43.067994764 +0000 UTC m=+19.253111939"
	Dec 21 20:25:44 no-preload-328404 kubelet[2201]: E1221 20:25:44.043307    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wkztz" containerName="coredns"
	Dec 21 20:25:44 no-preload-328404 kubelet[2201]: I1221 20:25:44.976909    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bw7\" (UniqueName: \"kubernetes.io/projected/abf67b09-143c-43b8-862d-b90cd54af971-kube-api-access-s9bw7\") pod \"busybox\" (UID: \"abf67b09-143c-43b8-862d-b90cd54af971\") " pod="default/busybox"
	Dec 21 20:25:45 no-preload-328404 kubelet[2201]: E1221 20:25:45.045758    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wkztz" containerName="coredns"
	
	
	==> storage-provisioner [293d4aa9bc1bcd7fc25d0a0380b54705a561a0402328ae17b9cd5c3086d32ac2] <==
	I1221 20:25:42.614806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:25:42.623832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:25:42.623894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:25:42.626389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:42.634527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:25:42.634733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:25:42.634940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-328404_b1010b9e-0140-43a2-9e79-6b0d461d2b7e!
	I1221 20:25:42.634973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf33a741-9273-4d62-a26d-92d41502a937", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-328404_b1010b9e-0140-43a2-9e79-6b0d461d2b7e became leader
	W1221 20:25:42.638035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:42.643575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:25:42.736218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-328404_b1010b9e-0140-43a2-9e79-6b0d461d2b7e!
	W1221 20:25:44.647185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:44.651974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:46.655385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:46.659904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:48.663541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:48.668025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:50.670820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:50.745926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:52.749991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:52.754356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:54.757754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:54.821714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328404 -n no-preload-328404
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-328404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (355.64848ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-413073 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-413073 describe deploy/metrics-server -n kube-system: exit status 1 (86.535151ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-413073 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
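Note on this failure: the exit status 11 above is raised by the pause check that runs before the addon is applied. The error chain ("check paused: list paused: runc: sudo runc list -f json: Process exited with status 1" with "open /run/runc: no such file or directory") shows the node has no runc state directory, which is expected when the profile runs on cri-o, so the check itself fails and metrics-server is never deployed; the subsequent "deployments.apps \"metrics-server\" not found" and the empty deployment info are presumably follow-on effects of that. The expected image string is simply the --registries override prefixed onto the --images override (fake.domain + "/" + registry.k8s.io/echoserver:1.4). A minimal way to reproduce the failing check by hand, assuming the embed-certs-413073 profile is still running (these commands are illustrative only and are not part of the test):

	# reproduces the failing pause check verbatim; expected to error with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p embed-certs-413073 ssh "sudo runc list -f json"
	# one cri-o-native way to list the kube-system containers instead (crictl with a namespace label filter)
	out/minikube-linux-amd64 -p embed-certs-413073 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"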
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-413073
helpers_test.go:244: (dbg) docker inspect embed-certs-413073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	        "Created": "2025-12-21T20:25:22.363216828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:25:22.401447676Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hostname",
	        "HostsPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hosts",
	        "LogPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9-json.log",
	        "Name": "/embed-certs-413073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-413073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-413073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	                "LowerDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-413073",
	                "Source": "/var/lib/docker/volumes/embed-certs-413073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-413073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-413073",
	                "name.minikube.sigs.k8s.io": "embed-certs-413073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e1371d09464e933be0941e9a82551ccf5d21972177bd22561ca57c1b25973b4c",
	            "SandboxKey": "/var/run/docker/netns/e1371d09464e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-413073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4158e54948a98ff7a88de94749c8958f71f898a500c109dd7a967015c32451c6",
	                    "EndpointID": "09b93cf16e5a96ce9eecf51eed4554d6b35f9f61c9314bc5f773f02cf9ded9fb",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:1b:bc:3b:22:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-413073",
	                        "885ba42913bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25: (1.027576584s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-149976 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo docker system info                                                                                                                                                                                                      │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo containerd config dump                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                               │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:25:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:25:59.356131  341446 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:25:59.356247  341446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:59.356260  341446 out.go:374] Setting ErrFile to fd 2...
	I1221 20:25:59.356266  341446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:25:59.356515  341446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:25:59.356947  341446 out.go:368] Setting JSON to false
	I1221 20:25:59.358153  341446 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4108,"bootTime":1766344651,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:25:59.358209  341446 start.go:143] virtualization: kvm guest
	I1221 20:25:59.359897  341446 out.go:179] * [old-k8s-version-699289] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:25:59.361087  341446 notify.go:221] Checking for updates...
	I1221 20:25:59.361122  341446 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:25:59.362410  341446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:25:59.363574  341446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:25:59.364802  341446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:25:59.365936  341446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:25:59.367061  341446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:25:59.368675  341446 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:25:59.370389  341446 out.go:179] * Kubernetes 1.34.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.3
	I1221 20:25:59.371551  341446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:25:59.394018  341446 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:25:59.394105  341446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:59.449074  341446 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-21 20:25:59.439742993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:59.449257  341446 docker.go:319] overlay module found
	I1221 20:25:59.450863  341446 out.go:179] * Using the docker driver based on existing profile
	I1221 20:25:59.452076  341446 start.go:309] selected driver: docker
	I1221 20:25:59.452089  341446 start.go:928] validating driver "docker" against &{Name:old-k8s-version-699289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-699289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:25:59.452167  341446 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:25:59.452719  341446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:25:59.505575  341446 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:83 SystemTime:2025-12-21 20:25:59.495995271 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:25:59.505931  341446 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:25:59.505967  341446 cni.go:84] Creating CNI manager for ""
	I1221 20:25:59.506036  341446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:25:59.506087  341446 start.go:353] cluster config:
	{Name:old-k8s-version-699289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-699289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:25:59.507894  341446 out.go:179] * Starting "old-k8s-version-699289" primary control-plane node in "old-k8s-version-699289" cluster
	I1221 20:25:59.509103  341446 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:25:59.510303  341446 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:25:59.511595  341446 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1221 20:25:59.511623  341446 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1221 20:25:59.511630  341446 cache.go:65] Caching tarball of preloaded images
	I1221 20:25:59.511685  341446 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:25:59.511707  341446 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:25:59.511715  341446 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1221 20:25:59.511808  341446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/config.json ...
	I1221 20:25:59.530890  341446 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:25:59.530910  341446 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:25:59.530926  341446 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:25:59.530951  341446 start.go:360] acquireMachinesLock for old-k8s-version-699289: {Name:mk918761c9c3626149715adfeb92f77b374f2e38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:25:59.531008  341446 start.go:364] duration metric: took 38.455µs to acquireMachinesLock for "old-k8s-version-699289"
	I1221 20:25:59.531029  341446 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:25:59.531035  341446 fix.go:54] fixHost starting: 
	I1221 20:25:59.531314  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:25:59.547711  341446 fix.go:112] recreateIfNeeded on old-k8s-version-699289: state=Stopped err=<nil>
	W1221 20:25:59.547734  341446 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:25:58.363559  328795 node_ready.go:49] node "embed-certs-413073" is "Ready"
	I1221 20:25:58.363598  328795 node_ready.go:38] duration metric: took 12.502755225s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:25:58.363616  328795 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:25:58.363666  328795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:25:58.376787  328795 api_server.go:72] duration metric: took 12.835485702s to wait for apiserver process to appear ...
	I1221 20:25:58.376810  328795 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:25:58.376827  328795 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:25:58.382134  328795 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1221 20:25:58.382986  328795 api_server.go:141] control plane version: v1.34.3
	I1221 20:25:58.383015  328795 api_server.go:131] duration metric: took 6.197548ms to wait for apiserver health ...
	I1221 20:25:58.383025  328795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:25:58.386459  328795 system_pods.go:59] 8 kube-system pods found
	I1221 20:25:58.386494  328795 system_pods.go:61] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:58.386503  328795 system_pods.go:61] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running
	I1221 20:25:58.386513  328795 system_pods.go:61] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running
	I1221 20:25:58.386520  328795 system_pods.go:61] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running
	I1221 20:25:58.386531  328795 system_pods.go:61] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running
	I1221 20:25:58.386537  328795 system_pods.go:61] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running
	I1221 20:25:58.386546  328795 system_pods.go:61] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running
	I1221 20:25:58.386553  328795 system_pods.go:61] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:58.386564  328795 system_pods.go:74] duration metric: took 3.531264ms to wait for pod list to return data ...
	I1221 20:25:58.386574  328795 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:25:58.389345  328795 default_sa.go:45] found service account: "default"
	I1221 20:25:58.389378  328795 default_sa.go:55] duration metric: took 2.797019ms for default service account to be created ...
	I1221 20:25:58.389387  328795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:25:58.391996  328795 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:58.392021  328795 system_pods.go:89] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:58.392027  328795 system_pods.go:89] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running
	I1221 20:25:58.392033  328795 system_pods.go:89] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running
	I1221 20:25:58.392036  328795 system_pods.go:89] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running
	I1221 20:25:58.392040  328795 system_pods.go:89] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running
	I1221 20:25:58.392043  328795 system_pods.go:89] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running
	I1221 20:25:58.392047  328795 system_pods.go:89] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running
	I1221 20:25:58.392051  328795 system_pods.go:89] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:58.392073  328795 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1221 20:25:58.649279  328795 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:58.649309  328795 system_pods.go:89] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:25:58.649316  328795 system_pods.go:89] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running
	I1221 20:25:58.649322  328795 system_pods.go:89] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running
	I1221 20:25:58.649328  328795 system_pods.go:89] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running
	I1221 20:25:58.649332  328795 system_pods.go:89] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running
	I1221 20:25:58.649339  328795 system_pods.go:89] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running
	I1221 20:25:58.649342  328795 system_pods.go:89] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running
	I1221 20:25:58.649347  328795 system_pods.go:89] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:25:58.947102  328795 system_pods.go:86] 8 kube-system pods found
	I1221 20:25:58.947144  328795 system_pods.go:89] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Running
	I1221 20:25:58.947153  328795 system_pods.go:89] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running
	I1221 20:25:58.947158  328795 system_pods.go:89] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running
	I1221 20:25:58.947164  328795 system_pods.go:89] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running
	I1221 20:25:58.947172  328795 system_pods.go:89] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running
	I1221 20:25:58.947177  328795 system_pods.go:89] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running
	I1221 20:25:58.947182  328795 system_pods.go:89] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running
	I1221 20:25:58.947187  328795 system_pods.go:89] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Running
	I1221 20:25:58.947203  328795 system_pods.go:126] duration metric: took 557.808354ms to wait for k8s-apps to be running ...
	I1221 20:25:58.947214  328795 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:25:58.947283  328795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:25:58.960659  328795 system_svc.go:56] duration metric: took 13.425765ms WaitForService to wait for kubelet
	I1221 20:25:58.960686  328795 kubeadm.go:587] duration metric: took 13.419388836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:25:58.960703  328795 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:25:58.962959  328795 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:25:58.962981  328795 node_conditions.go:123] node cpu capacity is 8
	I1221 20:25:58.962998  328795 node_conditions.go:105] duration metric: took 2.289673ms to run NodePressure ...
	I1221 20:25:58.963012  328795 start.go:242] waiting for startup goroutines ...
	I1221 20:25:58.963026  328795 start.go:247] waiting for cluster config update ...
	I1221 20:25:58.963040  328795 start.go:256] writing updated cluster config ...
	I1221 20:25:58.963306  328795 ssh_runner.go:195] Run: rm -f paused
	I1221 20:25:58.966780  328795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:25:59.046912  328795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.051800  328795 pod_ready.go:94] pod "coredns-66bc5c9577-lvwlf" is "Ready"
	I1221 20:25:59.051824  328795 pod_ready.go:86] duration metric: took 4.882934ms for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.053837  328795 pod_ready.go:83] waiting for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.058075  328795 pod_ready.go:94] pod "etcd-embed-certs-413073" is "Ready"
	I1221 20:25:59.058098  328795 pod_ready.go:86] duration metric: took 4.233721ms for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.060034  328795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.064102  328795 pod_ready.go:94] pod "kube-apiserver-embed-certs-413073" is "Ready"
	I1221 20:25:59.064123  328795 pod_ready.go:86] duration metric: took 4.069528ms for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.066054  328795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.370883  328795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-413073" is "Ready"
	I1221 20:25:59.370910  328795 pod_ready.go:86] duration metric: took 304.835247ms for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.571106  328795 pod_ready.go:83] waiting for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:25:59.971161  328795 pod_ready.go:94] pod "kube-proxy-qvdzm" is "Ready"
	I1221 20:25:59.971189  328795 pod_ready.go:86] duration metric: took 400.056954ms for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:00.171093  328795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:00.570720  328795 pod_ready.go:94] pod "kube-scheduler-embed-certs-413073" is "Ready"
	I1221 20:26:00.570743  328795 pod_ready.go:86] duration metric: took 399.628115ms for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:00.570754  328795 pod_ready.go:40] duration metric: took 1.603948432s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:00.616111  328795 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:26:00.617715  328795 out.go:179] * Done! kubectl is now configured to use "embed-certs-413073" cluster and "default" namespace by default
	I1221 20:25:56.672390  339032 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Running}}
	I1221 20:25:56.693493  339032 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:25:56.714870  339032 cli_runner.go:164] Run: docker exec default-k8s-diff-port-766361 stat /var/lib/dpkg/alternatives/iptables
	I1221 20:25:56.761138  339032 oci.go:144] the created container "default-k8s-diff-port-766361" has a running status.
	I1221 20:25:56.761189  339032 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa...
	I1221 20:25:56.792605  339032 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 20:25:56.819056  339032 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:25:56.847973  339032 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 20:25:56.847992  339032 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-766361 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 20:25:56.898012  339032 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:25:56.921803  339032 machine.go:94] provisionDockerMachine start ...
	I1221 20:25:56.922006  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:25:56.946891  339032 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:56.947271  339032 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1221 20:25:56.947293  339032 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:25:56.948086  339032 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44908->127.0.0.1:33109: read: connection reset by peer
	I1221 20:26:00.088845  339032 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:26:00.088870  339032 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-766361"
	I1221 20:26:00.088940  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:00.106112  339032 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:00.106411  339032 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1221 20:26:00.106434  339032 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766361 && echo "default-k8s-diff-port-766361" | sudo tee /etc/hostname
	I1221 20:26:00.252294  339032 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:26:00.252373  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:00.269778  339032 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:00.270024  339032 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1221 20:26:00.270047  339032 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766361/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:26:00.403662  339032 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:26:00.403689  339032 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:26:00.403719  339032 ubuntu.go:190] setting up certificates
	I1221 20:26:00.403733  339032 provision.go:84] configureAuth start
	I1221 20:26:00.403782  339032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:26:00.422138  339032 provision.go:143] copyHostCerts
	I1221 20:26:00.422197  339032 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:26:00.422205  339032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:26:00.422307  339032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:26:00.422410  339032 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:26:00.422420  339032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:26:00.422451  339032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:26:00.422503  339032 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:26:00.422513  339032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:26:00.422535  339032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:26:00.422579  339032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766361 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-766361 localhost minikube]
	I1221 20:26:00.452864  339032 provision.go:177] copyRemoteCerts
	I1221 20:26:00.452913  339032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:26:00.452958  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:00.469530  339032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:26:00.567042  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:26:00.587068  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1221 20:26:00.604672  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:26:00.623354  339032 provision.go:87] duration metric: took 219.609948ms to configureAuth
	I1221 20:26:00.623380  339032 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:26:00.623561  339032 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:00.623671  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:00.647363  339032 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:00.647578  339032 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1221 20:26:00.647598  339032 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:26:00.921355  339032 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:26:00.921387  339032 machine.go:97] duration metric: took 3.999557809s to provisionDockerMachine
	I1221 20:26:00.921402  339032 client.go:176] duration metric: took 9.142991568s to LocalClient.Create
	I1221 20:26:00.921426  339032 start.go:167] duration metric: took 9.143052646s to libmachine.API.Create "default-k8s-diff-port-766361"
	I1221 20:26:00.921450  339032 start.go:293] postStartSetup for "default-k8s-diff-port-766361" (driver="docker")
	I1221 20:26:00.921467  339032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:26:00.921553  339032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:26:00.921605  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:00.938973  339032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:26:01.036998  339032 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:26:01.040486  339032 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:26:01.040510  339032 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:26:01.040520  339032 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:26:01.040585  339032 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:26:01.040688  339032 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:26:01.040811  339032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:26:01.048010  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:01.067897  339032 start.go:296] duration metric: took 146.422827ms for postStartSetup
	I1221 20:26:01.068313  339032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:26:01.089480  339032 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json ...
	I1221 20:26:01.089741  339032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:26:01.089784  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:01.106539  339032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:26:01.199919  339032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:26:01.204249  339032 start.go:128] duration metric: took 9.427781854s to createHost
	I1221 20:26:01.204275  339032 start.go:83] releasing machines lock for "default-k8s-diff-port-766361", held for 9.427951346s
	I1221 20:26:01.204346  339032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:26:01.221519  339032 ssh_runner.go:195] Run: cat /version.json
	I1221 20:26:01.221561  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:01.221564  339032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:26:01.221622  339032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:26:01.240847  339032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:26:01.240866  339032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:26:01.334439  339032 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:01.392154  339032 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:26:01.427458  339032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:26:01.432201  339032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:26:01.432292  339032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:26:01.457808  339032 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 20:26:01.457832  339032 start.go:496] detecting cgroup driver to use...
	I1221 20:26:01.457868  339032 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:26:01.457915  339032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:26:01.473626  339032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:26:01.486084  339032 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:26:01.486133  339032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:26:01.502024  339032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:26:01.518667  339032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:26:01.615504  339032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:26:01.719625  339032 docker.go:234] disabling docker service ...
	I1221 20:26:01.719710  339032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:26:01.740284  339032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:26:01.754576  339032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:26:01.837847  339032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:26:01.918602  339032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:26:01.932577  339032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:26:01.947204  339032 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:26:01.947297  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:01.957025  339032 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:26:01.957078  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:01.965538  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:01.973807  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:01.982056  339032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:26:01.989669  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:01.997931  339032 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:02.010894  339032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:02.018950  339032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:26:02.025667  339032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:26:02.032554  339032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:02.111338  339032 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:26:02.250078  339032 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:26:02.250150  339032 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:26:02.254146  339032 start.go:564] Will wait 60s for crictl version
	I1221 20:26:02.254218  339032 ssh_runner.go:195] Run: which crictl
	I1221 20:26:02.257654  339032 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:26:02.280501  339032 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:26:02.280583  339032 ssh_runner.go:195] Run: crio --version
	I1221 20:26:02.308892  339032 ssh_runner.go:195] Run: crio --version
	I1221 20:26:02.337579  339032 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:25:59.549401  341446 out.go:252] * Restarting existing docker container for "old-k8s-version-699289" ...
	I1221 20:25:59.549459  341446 cli_runner.go:164] Run: docker start old-k8s-version-699289
	I1221 20:25:59.790823  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:25:59.808681  341446 kic.go:430] container "old-k8s-version-699289" state is running.
	I1221 20:25:59.808992  341446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-699289
	I1221 20:25:59.827830  341446 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/config.json ...
	I1221 20:25:59.828052  341446 machine.go:94] provisionDockerMachine start ...
	I1221 20:25:59.828126  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:25:59.847369  341446 main.go:144] libmachine: Using SSH client type: native
	I1221 20:25:59.847613  341446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1221 20:25:59.847630  341446 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:25:59.848198  341446 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51030->127.0.0.1:33114: read: connection reset by peer
	I1221 20:26:02.983816  341446 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-699289
	
	I1221 20:26:02.983843  341446 ubuntu.go:182] provisioning hostname "old-k8s-version-699289"
	I1221 20:26:02.983947  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:03.002965  341446 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:03.003255  341446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1221 20:26:03.003292  341446 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-699289 && echo "old-k8s-version-699289" | sudo tee /etc/hostname
	I1221 20:26:03.147449  341446 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-699289
	
	I1221 20:26:03.147524  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:03.165836  341446 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:03.166052  341446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1221 20:26:03.166081  341446 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-699289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-699289/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-699289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:26:03.303293  341446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
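
The SSH script above is an idempotent /etc/hosts edit: if any line already ends with the node name it does nothing, if a 127.0.1.1 entry exists it rewrites it with sed, otherwise it appends one. A minimal Go sketch of the same decision applied to hosts-file content (the hostname is taken from the log; the helper itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if some line already maps
// the hostname, leave the content alone; if a 127.0.1.1 line exists, rewrite
// it in place; otherwise append a new 127.0.1.1 entry.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*[ \t]`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1[ \t].*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 ubuntu\n"
	fmt.Print(ensureHostsEntry(in, "old-k8s-version-699289"))
}
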
	I1221 20:26:03.303321  341446 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:26:03.303355  341446 ubuntu.go:190] setting up certificates
	I1221 20:26:03.303368  341446 provision.go:84] configureAuth start
	I1221 20:26:03.303421  341446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-699289
	I1221 20:26:03.322047  341446 provision.go:143] copyHostCerts
	I1221 20:26:03.322104  341446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:26:03.322119  341446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:26:03.322187  341446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:26:03.322327  341446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:26:03.322337  341446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:26:03.322368  341446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:26:03.322434  341446 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:26:03.322442  341446 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:26:03.322466  341446 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:26:03.322515  341446 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-699289 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-699289]
	I1221 20:26:03.390252  341446 provision.go:177] copyRemoteCerts
	I1221 20:26:03.390314  341446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:26:03.390368  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:03.408905  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:03.509272  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1221 20:26:03.527429  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:26:03.545577  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:26:03.564372  341446 provision.go:87] duration metric: took 260.990157ms to configureAuth
	I1221 20:26:03.564399  341446 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:26:03.564598  341446 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:26:03.564709  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:03.583161  341446 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:03.583402  341446 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1221 20:26:03.583423  341446 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:26:03.908648  341446 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:26:03.908679  341446 machine.go:97] duration metric: took 4.08061186s to provisionDockerMachine
	I1221 20:26:03.908695  341446 start.go:293] postStartSetup for "old-k8s-version-699289" (driver="docker")
	I1221 20:26:03.908709  341446 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:26:03.908799  341446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:26:03.908861  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:03.927867  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:04.024020  341446 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:26:04.027438  341446 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:26:04.027471  341446 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:26:04.027482  341446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:26:04.027535  341446 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:26:04.027605  341446 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:26:04.027689  341446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:26:04.035101  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:04.051437  341446 start.go:296] duration metric: took 142.721862ms for postStartSetup
	I1221 20:26:04.051511  341446 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:26:04.051556  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:04.069277  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:04.162811  341446 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:26:04.167297  341446 fix.go:56] duration metric: took 4.636257953s for fixHost
	I1221 20:26:04.167324  341446 start.go:83] releasing machines lock for "old-k8s-version-699289", held for 4.636302697s
	I1221 20:26:04.167384  341446 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-699289
	I1221 20:26:04.186403  341446 ssh_runner.go:195] Run: cat /version.json
	I1221 20:26:04.186473  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:04.186515  341446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:26:04.186585  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:04.207774  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:04.210376  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:04.305388  341446 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:04.389276  341446 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:26:04.429677  341446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:26:04.435083  341446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:26:04.435152  341446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:26:04.444000  341446 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
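
The find/mv invocation above side-lines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the kindnet CNI that minikube installs; here nothing matched, so nothing was disabled. A small Go sketch of the same rename pass (directory and suffix come from the log; error handling is simplified and the helper is illustrative only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames *bridge* and *podman* files directly under
// dir to <name>.mk_disabled, skipping already-disabled files, and returns the
// paths it moved -- the same effect as the find/mv command in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	var moved []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return moved, err
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already side-lined
			}
			if info, err := os.Stat(path); err != nil || info.IsDir() {
				continue // only regular files, like -maxdepth 1 -type f
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, path)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", moved)
}
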
	I1221 20:26:04.444023  341446 start.go:496] detecting cgroup driver to use...
	I1221 20:26:04.444052  341446 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:26:04.444095  341446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:26:04.459669  341446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:26:04.473377  341446 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:26:04.473432  341446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:26:04.490294  341446 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:26:04.504312  341446 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:26:04.588934  341446 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:26:04.673748  341446 docker.go:234] disabling docker service ...
	I1221 20:26:04.673825  341446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:26:04.687515  341446 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:26:04.699122  341446 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:26:04.781001  341446 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:26:04.859594  341446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:26:04.872048  341446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:26:04.885489  341446 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1221 20:26:04.885549  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.894136  341446 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:26:04.894194  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.902773  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.910925  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.918939  341446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:26:04.926391  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.934591  341446 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.942819  341446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:04.951372  341446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:26:04.958778  341446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:26:04.966075  341446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:05.044295  341446 ssh_runner.go:195] Run: sudo systemctl restart crio
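
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is forced to systemd, conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A rough Go equivalent of the first two substitutions applied to a sample drop-in (the sample content is illustrative, not the actual file on the node):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mimics:
//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
//   sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	return conf
}

func main() {
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(sample))
}
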
	I1221 20:26:05.178571  341446 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:26:05.178634  341446 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:26:05.182556  341446 start.go:564] Will wait 60s for crictl version
	I1221 20:26:05.182614  341446 ssh_runner.go:195] Run: which crictl
	I1221 20:26:05.186001  341446 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:26:05.209272  341446 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:26:05.209360  341446 ssh_runner.go:195] Run: crio --version
	I1221 20:26:05.236518  341446 ssh_runner.go:195] Run: crio --version
	I1221 20:26:05.266429  341446 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1221 20:26:05.267797  341446 cli_runner.go:164] Run: docker network inspect old-k8s-version-699289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:26:05.285366  341446 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:26:05.289361  341446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:05.299010  341446 kubeadm.go:884] updating cluster {Name:old-k8s-version-699289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-699289 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:26:05.299115  341446 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1221 20:26:05.299159  341446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:05.329296  341446 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:05.329319  341446 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:26:05.329369  341446 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:05.355296  341446 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:05.355319  341446 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:26:05.355326  341446 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1221 20:26:05.355422  341446 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-699289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-699289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:26:05.355485  341446 ssh_runner.go:195] Run: crio config
	I1221 20:26:05.402915  341446 cni.go:84] Creating CNI manager for ""
	I1221 20:26:05.402946  341446 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:05.402964  341446 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:26:05.402993  341446 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-699289 NodeName:old-k8s-version-699289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:26:05.403188  341446 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-699289"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:26:05.403283  341446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1221 20:26:05.411574  341446 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:26:05.411637  341446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:26:05.418965  341446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:26:05.431405  341446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:26:05.443117  341446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
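
The kubeadm.yaml.new copied here is the rendered form of the kubeadm options struct logged at kubeadm.go:190 above. As a rough illustration of that rendering step, the text/template sketch below produces just the InitConfiguration fragment from a few of those fields; the struct and template are my own stand-ins, not minikube's actual types or template:

package main

import (
	"os"
	"text/template"
)

// initCfg carries only the fields needed for the InitConfiguration fragment.
type initCfg struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Values taken from the log above for old-k8s-version-699289.
	cfg := initCfg{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		NodeName:         "old-k8s-version-699289",
		CRISocket:        "/var/run/crio/crio.sock",
		NodeIP:           "192.168.76.2",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
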
	I1221 20:26:05.454697  341446 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:26:05.458059  341446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:05.467290  341446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:05.544899  341446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:05.572492  341446 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289 for IP: 192.168.76.2
	I1221 20:26:05.572512  341446 certs.go:195] generating shared ca certs ...
	I1221 20:26:05.572527  341446 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:05.572697  341446 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:26:05.572752  341446 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:26:05.572766  341446 certs.go:257] generating profile certs ...
	I1221 20:26:05.572886  341446 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/client.key
	I1221 20:26:05.572970  341446 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/apiserver.key.e2e45a7c
	I1221 20:26:05.573025  341446 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/proxy-client.key
	I1221 20:26:05.573155  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:26:05.573210  341446 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:26:05.573235  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:26:05.573273  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:26:05.573307  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:26:05.573343  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:26:05.573399  341446 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:05.574131  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:26:05.592186  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:26:05.610110  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:26:05.627914  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:26:05.648194  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1221 20:26:05.669814  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:26:05.686194  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:26:05.702812  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/old-k8s-version-699289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:26:05.719834  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:26:05.737537  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:26:05.754650  341446 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:26:05.772793  341446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:26:05.784757  341446 ssh_runner.go:195] Run: openssl version
	I1221 20:26:05.790463  341446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:26:05.797201  341446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:26:05.804018  341446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:26:05.807424  341446 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:26:05.807497  341446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:26:05.844024  341446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:05.851766  341446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:05.859434  341446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:26:05.870419  341446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:05.875926  341446 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:05.875987  341446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:05.911634  341446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:26:05.918760  341446 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:26:05.926735  341446 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:26:05.933671  341446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:26:05.937117  341446 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:26:05.937163  341446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:26:05.974559  341446 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
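
The checks above verify that each certificate linked under /etc/ssl/certs also has a companion symlink named after its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is what openssl x509 -hash -noout prints. A small Go sketch that computes the hash the same way and creates such a link (paths match the log; this is illustrative and needs root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and links
// it into /etc/ssl/certs/<hash>.0, the layout the "sudo test -L" checks above
// look for. It returns the link path.
func linkBySubjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like "ln -fs" does
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked:", link)
}
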
	I1221 20:26:05.982294  341446 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:26:05.985972  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:26:06.020067  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:26:06.053801  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:26:06.099302  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:26:06.146740  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:26:06.201289  341446 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
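
Each openssl x509 -checkend 86400 call above succeeds only if the certificate will still be valid 86400 seconds (one day) from now, which is how minikube decides whether the existing control-plane certs can be reused. The same condition expressed with crypto/x509 (the path is one of the files tested above; illustrative only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the first certificate in the PEM file
// expires within d -- the condition "openssl x509 -checkend" tests for.
func willExpireWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no certificate PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
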
	I1221 20:26:06.250454  341446 kubeadm.go:401] StartCluster: {Name:old-k8s-version-699289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-699289 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:06.250561  341446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:26:06.250640  341446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:26:06.286630  341446 cri.go:96] found id: "d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5"
	I1221 20:26:06.286654  341446 cri.go:96] found id: "f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4"
	I1221 20:26:06.286660  341446 cri.go:96] found id: "5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559"
	I1221 20:26:06.286665  341446 cri.go:96] found id: "64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d"
	I1221 20:26:06.286669  341446 cri.go:96] found id: ""
	I1221 20:26:06.286726  341446 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:26:06.301177  341446 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:06Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:26:06.301290  341446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:26:06.310805  341446 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:26:06.310821  341446 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:26:06.310870  341446 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:26:06.317938  341446 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:26:06.319118  341446 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-699289" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:06.319916  341446 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-699289" cluster setting kubeconfig missing "old-k8s-version-699289" context setting]
	I1221 20:26:06.321118  341446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:06.323405  341446 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:26:06.331009  341446 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1221 20:26:06.331041  341446 kubeadm.go:602] duration metric: took 20.213438ms to restartPrimaryControlPlane
	I1221 20:26:06.331051  341446 kubeadm.go:403] duration metric: took 80.608081ms to StartCluster
	I1221 20:26:06.331067  341446 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:06.331140  341446 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:06.333402  341446 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:06.333668  341446 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:26:06.333771  341446 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:26:06.333863  341446 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:26:06.333868  341446 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-699289"
	I1221 20:26:06.333887  341446 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-699289"
	W1221 20:26:06.333895  341446 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:26:06.333903  341446 addons.go:70] Setting dashboard=true in profile "old-k8s-version-699289"
	I1221 20:26:06.333922  341446 addons.go:239] Setting addon dashboard=true in "old-k8s-version-699289"
	W1221 20:26:06.333932  341446 addons.go:248] addon dashboard should already be in state true
	I1221 20:26:06.333932  341446 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-699289"
	I1221 20:26:06.333946  341446 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-699289"
	I1221 20:26:06.333949  341446 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:26:06.333923  341446 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:26:06.334340  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:26:06.334622  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:26:06.334800  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:26:06.335933  341446 out.go:179] * Verifying Kubernetes components...
	I1221 20:26:06.337287  341446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:06.363142  341446 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:26:06.364274  341446 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-699289"
	W1221 20:26:06.364298  341446 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:26:06.364327  341446 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:26:06.364790  341446 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:26:06.365664  341446 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:26:06.365666  341446 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:26:02.338788  339032 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-766361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:26:02.356401  339032 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:26:02.360685  339032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:02.370668  339032 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:26:02.370802  339032 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:02.370849  339032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:02.400867  339032 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:02.400886  339032 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:26:02.400926  339032 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:02.425999  339032 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:02.426018  339032 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:26:02.426029  339032 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1221 20:26:02.426132  339032 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-766361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:26:02.426215  339032 ssh_runner.go:195] Run: crio config
	I1221 20:26:02.470936  339032 cni.go:84] Creating CNI manager for ""
	I1221 20:26:02.470957  339032 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:02.470972  339032 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:26:02.470994  339032 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766361 NodeName:default-k8s-diff-port-766361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:26:02.471104  339032 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766361"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:26:02.471167  339032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:26:02.479566  339032 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:26:02.479643  339032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:26:02.487725  339032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1221 20:26:02.500655  339032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:26:02.515248  339032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
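
Comparing this rendered config with the v1.28.0 one earlier in the log shows the kubeadm API bump from v1beta3 to v1beta4: extraArgs and kubeletExtraArgs change from a plain key/value map to a list of name/value pairs. A small Go sketch converting the older map form into the newer list form; the Arg type is a local stand-in, not kubeadm's actual Go type:

package main

import (
	"fmt"
	"sort"
)

// Arg is a local stand-in for a v1beta4-style name/value pair.
type Arg struct {
	Name  string
	Value string
}

// toArgList converts v1beta3-style map extraArgs into the ordered
// name/value list shape used by the v1beta4 config above.
func toArgList(m map[string]string) []Arg {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output
	args := make([]Arg, 0, len(m))
	for _, k := range keys {
		args = append(args, Arg{Name: k, Value: m[k]})
	}
	return args
}

func main() {
	// controllerManager extraArgs as they appear in the v1beta3 config above.
	old := map[string]string{"allocate-node-cidrs": "true", "leader-elect": "false"}
	for _, a := range toArgList(old) {
		fmt.Printf("- name: %q\n  value: %q\n", a.Name, a.Value)
	}
}
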
	I1221 20:26:02.527178  339032 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:26:02.530534  339032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:02.540545  339032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:02.619205  339032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:02.647382  339032 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361 for IP: 192.168.103.2
	I1221 20:26:02.647412  339032 certs.go:195] generating shared ca certs ...
	I1221 20:26:02.647431  339032 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.647595  339032 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:26:02.647656  339032 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:26:02.647668  339032 certs.go:257] generating profile certs ...
	I1221 20:26:02.647733  339032 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key
	I1221 20:26:02.647748  339032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.crt with IP's: []
	I1221 20:26:02.817353  339032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.crt ...
	I1221 20:26:02.817380  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.crt: {Name:mkb54dad5bd85db717b06c47aa37d8aa7c4ee744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.817536  339032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key ...
	I1221 20:26:02.817550  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key: {Name:mkf10e94a925931ca8e46fd3da42be0f6c7f5c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.817632  339032 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53
	I1221 20:26:02.817649  339032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt.07b6dc53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1221 20:26:02.879678  339032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt.07b6dc53 ...
	I1221 20:26:02.879709  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt.07b6dc53: {Name:mke940c81f53a7668394909f9ea2980c945fade3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.879867  339032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53 ...
	I1221 20:26:02.879884  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53: {Name:mk96dd90505473c55c0dfe3f6b6a60e3e885e744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.879968  339032 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt.07b6dc53 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt
	I1221 20:26:02.880053  339032 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key
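
The apiserver certificate generated above is a leaf signed by minikubeCA with the service, loopback and node IPs as SANs ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]). A self-contained crypto/x509 sketch of a certificate with the same SAN shape, using a throwaway CA so it runs stand-alone (key size and validity here are illustrative, not necessarily what minikube uses):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the apiserver IP SANs from the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.103.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	leaf, _ := x509.ParseCertificate(leafDER)
	fmt.Println("issued cert with IP SANs:", leaf.IPAddresses)
}
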
	I1221 20:26:02.880125  339032 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key
	I1221 20:26:02.880144  339032 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt with IP's: []
	I1221 20:26:02.905076  339032 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt ...
	I1221 20:26:02.905107  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt: {Name:mk905614b9a1840b2b864c764307d765da9cab39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.905270  339032 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key ...
	I1221 20:26:02.905286  339032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key: {Name:mk1d788a9541bdfee9d3d2e2153b09b81000deda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:02.905472  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:26:02.905515  339032 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:26:02.905527  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:26:02.905558  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:26:02.905586  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:26:02.905613  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:26:02.905659  339032 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:02.906239  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:26:02.923957  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:26:02.941964  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:26:02.958392  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:26:02.974719  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1221 20:26:02.991918  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:26:03.009881  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:26:03.026614  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:26:03.042884  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:26:03.061652  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:26:03.077904  339032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:26:03.094419  339032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:26:03.106327  339032 ssh_runner.go:195] Run: openssl version
	I1221 20:26:03.112034  339032 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:26:03.118907  339032 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:26:03.125852  339032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:26:03.129239  339032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:26:03.129283  339032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:26:03.164199  339032 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:03.171794  339032 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:03.179944  339032 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:03.187262  339032 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:26:03.194289  339032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:03.197677  339032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:03.197723  339032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:03.232724  339032 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:26:03.240276  339032 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:26:03.247355  339032 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:26:03.254413  339032 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:26:03.261545  339032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:26:03.264952  339032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:26:03.265000  339032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:26:03.298282  339032 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:26:03.306025  339032 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
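
For context, each of the three passes above (copy a PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, symlink it into /etc/ssl/certs as <hash>.0) is the standard way a CA certificate is made visible to OpenSSL-based clients on the node. A minimal shell sketch using the minikubeCA.pem path from the log; the hash value differs per certificate:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves trusted CAs via <subject-hash>.0 links
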
	I1221 20:26:03.313280  339032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:26:03.316844  339032 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:26:03.316900  339032 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:03.316997  339032 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:26:03.317043  339032 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:26:03.343343  339032 cri.go:96] found id: ""
	I1221 20:26:03.343417  339032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:26:03.351161  339032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:26:03.359281  339032 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:26:03.359341  339032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:26:03.366514  339032 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:26:03.366531  339032 kubeadm.go:158] found existing configuration files:
	
	I1221 20:26:03.366568  339032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1221 20:26:03.373952  339032 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:26:03.373992  339032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:26:03.380988  339032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1221 20:26:03.388415  339032 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:26:03.388461  339032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:26:03.395451  339032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1221 20:26:03.403137  339032 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:26:03.403185  339032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:26:03.411683  339032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1221 20:26:03.418988  339032 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:26:03.419029  339032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:26:03.426249  339032 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:26:03.490382  339032 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:26:03.553238  339032 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
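
Both kubeadm preflight warnings above are non-fatal in this run: SystemVerification is explicitly ignored for the docker driver (see the --ignore-preflight-errors list two lines up), and the kubelet-service warning names its own remedy. A sketch of that remedy, taken from the warning text and only illustrative here since minikube manages the kubelet unit itself:

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service   # should now print "enabled"
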
	I1221 20:26:06.367144  341446 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:06.367161  341446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:26:06.367213  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:06.367292  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:26:06.367304  341446 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:26:06.367337  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:06.396414  341446 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:06.396490  341446 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:26:06.396614  341446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:06.402078  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:06.403865  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:06.426316  341446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:06.521423  341446 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:06.522805  341446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:06.527371  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:26:06.527394  341446 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:26:06.539492  341446 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-699289" to be "Ready" ...
	I1221 20:26:06.545007  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:26:06.545031  341446 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:26:06.550841  341446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:06.560412  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:26:06.560431  341446 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:26:06.576751  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:26:06.576776  341446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:26:06.593266  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:26:06.593287  341446 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:26:06.610832  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:26:06.610858  341446 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:26:06.629517  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:26:06.629540  341446 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:26:06.645783  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:26:06.645806  341446 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:26:06.658751  341446 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:06.658775  341446 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:26:06.675724  341446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:08.282173  341446 node_ready.go:49] node "old-k8s-version-699289" is "Ready"
	I1221 20:26:08.282210  341446 node_ready.go:38] duration metric: took 1.742688842s for node "old-k8s-version-699289" to be "Ready" ...
	I1221 20:26:08.282301  341446 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:08.282370  341446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:09.249793  341446 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.698922209s)
	I1221 20:26:09.250140  341446 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.727303469s)
	I1221 20:26:09.747761  341446 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.07198595s)
	I1221 20:26:09.747960  341446 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.465554763s)
	I1221 20:26:09.748001  341446 api_server.go:72] duration metric: took 3.414304549s to wait for apiserver process to appear ...
	I1221 20:26:09.748013  341446 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:09.748034  341446 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:26:09.751707  341446 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-699289 addons enable metrics-server
	
	I1221 20:26:09.754779  341446 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
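
The healthz wait that begins at 20:26:09 polls the endpoint logged above (https://192.168.76.2:8443/healthz) until the API server answers. A hypothetical manual check of that same endpoint, skipping TLS verification for brevity:

    curl -sk https://192.168.76.2:8443/healthz
    # prints "ok" once the API server is healthy
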
	
	
	==> CRI-O <==
	Dec 21 20:25:58 embed-certs-413073 crio[771]: time="2025-12-21T20:25:58.571153396Z" level=info msg="Started container" PID=1944 containerID=ad3a12fb9014ab7a7bf38db9d3d6dd0a4c2f75bab3dc2271a65e893ab56ef2fb description=kube-system/coredns-66bc5c9577-lvwlf/coredns id=284cce73-5e77-4e8e-beaf-19be5bf4742a name=/runtime.v1.RuntimeService/StartContainer sandboxID=716563a128ddde237f41d1f01896eb8f09c00cf0a118dbdb88ff96eb1580a99e
	Dec 21 20:25:58 embed-certs-413073 crio[771]: time="2025-12-21T20:25:58.57181137Z" level=info msg="Started container" PID=1943 containerID=f84e44577bd9a23892b7e045cd61e9d8d09e2eee3c35a07c9e99912d15874ed9 description=kube-system/storage-provisioner/storage-provisioner id=9bc2340b-f644-46b0-8a28-f39d30af41ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0167f1df2be790493c4398d9db07c5be27eb4176814b6d51c4d864a596dd548
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.060631364Z" level=info msg="Running pod sandbox: default/busybox/POD" id=05957903-00c3-445e-bb46-66fad6e5ae9a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.060717036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.066199711Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e3be12e2063bd7d1f58ce3009c451e2bd67d23210c73817c94196447290956d UID:c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f NetNS:/var/run/netns/3a824ba8-f14b-40e4-b1e4-fed5b3c9bbbf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b8658}] Aliases:map[]}"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.066259076Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.075642744Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e3be12e2063bd7d1f58ce3009c451e2bd67d23210c73817c94196447290956d UID:c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f NetNS:/var/run/netns/3a824ba8-f14b-40e4-b1e4-fed5b3c9bbbf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b8658}] Aliases:map[]}"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.075789706Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.076558848Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.077689919Z" level=info msg="Ran pod sandbox 5e3be12e2063bd7d1f58ce3009c451e2bd67d23210c73817c94196447290956d with infra container: default/busybox/POD" id=05957903-00c3-445e-bb46-66fad6e5ae9a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.078888372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8112d3f0-6c40-4cfe-86a0-8fe12e460020 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.078985547Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8112d3f0-6c40-4cfe-86a0-8fe12e460020 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.079014455Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8112d3f0-6c40-4cfe-86a0-8fe12e460020 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.079553999Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba5488ac-4d1b-41e2-b69f-d5814fa3785e name=/runtime.v1.ImageService/PullImage
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.081364655Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.708125763Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ba5488ac-4d1b-41e2-b69f-d5814fa3785e name=/runtime.v1.ImageService/PullImage
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.708756182Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea9a1be7-1f0b-4d0e-97d3-2cdda34ff6d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.709983885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8209004-5de9-4b87-91f3-ea6e97cca010 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.712949145Z" level=info msg="Creating container: default/busybox/busybox" id=205a01c5-99b0-4bb6-ba6e-7fe5c24ae016 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.713057175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.71646839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.716861899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.746088505Z" level=info msg="Created container c16c02231066f754216745a0d6e119e10deb7fe6d2538a21ceb1a84c6dd3679d: default/busybox/busybox" id=205a01c5-99b0-4bb6-ba6e-7fe5c24ae016 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.746737151Z" level=info msg="Starting container: c16c02231066f754216745a0d6e119e10deb7fe6d2538a21ceb1a84c6dd3679d" id=19646552-45e5-4741-af62-39cb6bd05702 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:01 embed-certs-413073 crio[771]: time="2025-12-21T20:26:01.74861173Z" level=info msg="Started container" PID=2019 containerID=c16c02231066f754216745a0d6e119e10deb7fe6d2538a21ceb1a84c6dd3679d description=default/busybox/busybox id=19646552-45e5-4741-af62-39cb6bd05702 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e3be12e2063bd7d1f58ce3009c451e2bd67d23210c73817c94196447290956d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c16c02231066f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5e3be12e2063b       busybox                                      default
	ad3a12fb9014a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   716563a128ddd       coredns-66bc5c9577-lvwlf                     kube-system
	f84e44577bd9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   b0167f1df2be7       storage-provisioner                          kube-system
	085bd391dccd9       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   901defb1d068f       kindnet-qnfsx                                kube-system
	44cb06d287d4e       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      24 seconds ago      Running             kube-proxy                0                   d70ed6d0d3d52       kube-proxy-qvdzm                             kube-system
	34b9c5610c43f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      34 seconds ago      Running             kube-controller-manager   0                   8b36155200d91       kube-controller-manager-embed-certs-413073   kube-system
	b297dfb6f98bf       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      34 seconds ago      Running             kube-apiserver            0                   642554192f819       kube-apiserver-embed-certs-413073            kube-system
	fcc6c50e5d769       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      34 seconds ago      Running             kube-scheduler            0                   6b8507bf66ef4       kube-scheduler-embed-certs-413073            kube-system
	1868b98b48786       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   31b4e6b1d3e4b       etcd-embed-certs-413073                      kube-system
	
	
	==> coredns [ad3a12fb9014ab7a7bf38db9d3d6dd0a4c2f75bab3dc2271a65e893ab56ef2fb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45537 - 64506 "HINFO IN 3870284608902127237.9181828125050983502. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093385068s
	
	
	==> describe nodes <==
	Name:               embed-certs-413073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-413073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-413073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-413073
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:26:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:25:58 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:25:58 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:25:58 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:25:58 +0000   Sun, 21 Dec 2025 20:25:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-413073
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                c4a4cac5-f7ed-43b3-8fd7-2b463810496e
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-lvwlf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-413073                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-qnfsx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-413073             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-413073    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qvdzm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-413073             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
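
(For the CPU row above: the per-pod requests table sums to 100m + 100m + 100m + 250m + 200m + 100m = 850m, roughly 10% of the node's 8 CPUs; only kindnet-qnfsx declares a CPU limit, hence the 100m limit total. The 220Mi memory figures sum the same way: 70Mi + 100Mi + 50Mi requested, 170Mi + 50Mi limited.)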
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 36s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 36s)  kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 36s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-413073 event: Registered Node embed-certs-413073 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-413073 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [1868b98b48786334edb5379e51e53634881d474385302c964a1035488524005a] <==
	{"level":"warn","ts":"2025-12-21T20:25:36.656737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.666631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.673553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.681176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.688460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.695286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.701527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.708798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.715394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.730415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.738707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.745128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.752816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.759432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.766578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.774471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.781923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.789167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.796088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.802986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.824483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.828278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.836514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.843584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:25:36.897400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53590","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:26:10 up  1:08,  0 user,  load average: 3.71, 3.72, 2.65
	Linux embed-certs-413073 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [085bd391dccd9b004a56af41587d796dfd0d7268cdfcafe45a65d1e34b2994ae] <==
	I1221 20:25:47.760190       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:25:47.760519       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1221 20:25:47.760687       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:25:47.760709       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:25:47.760735       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:25:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:25:48.059159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:25:48.059285       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:25:48.059316       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:25:48.059519       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:25:48.359472       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:25:48.359508       1 metrics.go:72] Registering metrics
	I1221 20:25:48.359582       1 controller.go:711] "Syncing nftables rules"
	I1221 20:25:57.966655       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:25:57.966742       1 main.go:301] handling current node
	I1221 20:26:07.965121       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:26:07.965175       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b297dfb6f98bfe07f599b54d6bd0c71d5f1fce283cf3b96d5fa85567304a4b87] <==
	E1221 20:25:37.505950       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1221 20:25:37.538380       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:25:37.541297       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:37.541806       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1221 20:25:37.548158       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:37.548717       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:25:37.708494       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:25:38.340762       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 20:25:38.345720       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:25:38.345735       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:25:38.812368       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:25:38.848071       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:25:38.945079       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:25:38.950685       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1221 20:25:38.951626       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:25:38.955443       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:25:39.376341       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:25:39.984020       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:25:39.992144       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:25:40.001700       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:25:44.780874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:44.785652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:25:45.427422       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1221 20:25:45.479030       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1221 20:26:08.899091       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:38896: use of closed network connection
	
	
	==> kube-controller-manager [34b9c5610c43f579c04ea91047810aae196213124ea6539350f5997b36c7f058] <==
	I1221 20:25:44.374242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1221 20:25:44.374219       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1221 20:25:44.374219       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:25:44.374276       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 20:25:44.374284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 20:25:44.375525       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 20:25:44.375574       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1221 20:25:44.375576       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1221 20:25:44.375637       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1221 20:25:44.375706       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 20:25:44.376837       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 20:25:44.376862       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 20:25:44.376893       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1221 20:25:44.376916       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:25:44.376935       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1221 20:25:44.376945       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 20:25:44.379252       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 20:25:44.379442       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1221 20:25:44.381748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:25:44.383021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:25:44.386127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1221 20:25:44.386249       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:25:44.393398       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1221 20:25:44.402908       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:25:59.333763       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [44cb06d287d4e683feb91f9fe6166345b065b0b5f1a1a7394a069268a2fd9da1] <==
	I1221 20:25:45.920255       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:25:45.991969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:25:46.092707       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:25:46.092760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1221 20:25:46.092844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:25:46.112138       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:25:46.112194       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:25:46.117194       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:25:46.117642       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:25:46.117658       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:25:46.119121       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:25:46.119151       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:25:46.119189       1 config.go:200] "Starting service config controller"
	I1221 20:25:46.119199       1 config.go:309] "Starting node config controller"
	I1221 20:25:46.119211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:25:46.119200       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:25:46.119338       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:25:46.119344       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:25:46.219680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:25:46.219718       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:25:46.219714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:25:46.219737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [fcc6c50e5d7690a52823c52e3270e6775b7ab544c5dd5246c8fb563f86e02dda] <==
	E1221 20:25:37.394307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 20:25:37.394343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:25:37.394589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 20:25:37.395713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:25:37.395764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 20:25:37.395814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1221 20:25:37.395845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:25:37.395906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 20:25:37.395925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 20:25:37.396044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 20:25:37.396047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 20:25:37.396093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:25:37.396157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:25:38.260426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:25:38.326063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 20:25:38.369360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:25:38.436080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:25:38.502592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1221 20:25:38.536967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 20:25:38.583317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:25:38.601798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:25:38.610956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 20:25:38.627430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 20:25:38.775118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1221 20:25:41.092822       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:25:40 embed-certs-413073 kubelet[1351]: I1221 20:25:40.914689    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-413073" podStartSLOduration=1.914681228 podStartE2EDuration="1.914681228s" podCreationTimestamp="2025-12-21 20:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:40.914305063 +0000 UTC m=+1.164844214" watchObservedRunningTime="2025-12-21 20:25:40.914681228 +0000 UTC m=+1.165220378"
	Dec 21 20:25:40 embed-certs-413073 kubelet[1351]: I1221 20:25:40.928068    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-413073" podStartSLOduration=1.9280488999999998 podStartE2EDuration="1.9280489s" podCreationTimestamp="2025-12-21 20:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:40.92749911 +0000 UTC m=+1.178038257" watchObservedRunningTime="2025-12-21 20:25:40.9280489 +0000 UTC m=+1.178588050"
	Dec 21 20:25:40 embed-certs-413073 kubelet[1351]: I1221 20:25:40.938507    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-413073" podStartSLOduration=1.938444592 podStartE2EDuration="1.938444592s" podCreationTimestamp="2025-12-21 20:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:40.937985946 +0000 UTC m=+1.188525086" watchObservedRunningTime="2025-12-21 20:25:40.938444592 +0000 UTC m=+1.188983745"
	Dec 21 20:25:44 embed-certs-413073 kubelet[1351]: I1221 20:25:44.378103    1351 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 20:25:44 embed-certs-413073 kubelet[1351]: I1221 20:25:44.378886    1351 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464398    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe58c6e7-54ff-4b21-9574-3529a25f66d1-lib-modules\") pod \"kindnet-qnfsx\" (UID: \"fe58c6e7-54ff-4b21-9574-3529a25f66d1\") " pod="kube-system/kindnet-qnfsx"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464479    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/654663b3-137f-4beb-8dac-3d7db7fba22e-kube-proxy\") pod \"kube-proxy-qvdzm\" (UID: \"654663b3-137f-4beb-8dac-3d7db7fba22e\") " pod="kube-system/kube-proxy-qvdzm"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464503    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654663b3-137f-4beb-8dac-3d7db7fba22e-lib-modules\") pod \"kube-proxy-qvdzm\" (UID: \"654663b3-137f-4beb-8dac-3d7db7fba22e\") " pod="kube-system/kube-proxy-qvdzm"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464548    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sv65\" (UniqueName: \"kubernetes.io/projected/654663b3-137f-4beb-8dac-3d7db7fba22e-kube-api-access-4sv65\") pod \"kube-proxy-qvdzm\" (UID: \"654663b3-137f-4beb-8dac-3d7db7fba22e\") " pod="kube-system/kube-proxy-qvdzm"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464580    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fe58c6e7-54ff-4b21-9574-3529a25f66d1-cni-cfg\") pod \"kindnet-qnfsx\" (UID: \"fe58c6e7-54ff-4b21-9574-3529a25f66d1\") " pod="kube-system/kindnet-qnfsx"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464639    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe58c6e7-54ff-4b21-9574-3529a25f66d1-xtables-lock\") pod \"kindnet-qnfsx\" (UID: \"fe58c6e7-54ff-4b21-9574-3529a25f66d1\") " pod="kube-system/kindnet-qnfsx"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464685    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skjv\" (UniqueName: \"kubernetes.io/projected/fe58c6e7-54ff-4b21-9574-3529a25f66d1-kube-api-access-2skjv\") pod \"kindnet-qnfsx\" (UID: \"fe58c6e7-54ff-4b21-9574-3529a25f66d1\") " pod="kube-system/kindnet-qnfsx"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.464716    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654663b3-137f-4beb-8dac-3d7db7fba22e-xtables-lock\") pod \"kube-proxy-qvdzm\" (UID: \"654663b3-137f-4beb-8dac-3d7db7fba22e\") " pod="kube-system/kube-proxy-qvdzm"
	Dec 21 20:25:45 embed-certs-413073 kubelet[1351]: I1221 20:25:45.906004    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvdzm" podStartSLOduration=0.905928214 podStartE2EDuration="905.928214ms" podCreationTimestamp="2025-12-21 20:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:45.905966147 +0000 UTC m=+6.156505291" watchObservedRunningTime="2025-12-21 20:25:45.905928214 +0000 UTC m=+6.156467366"
	Dec 21 20:25:47 embed-certs-413073 kubelet[1351]: I1221 20:25:47.922911    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qnfsx" podStartSLOduration=1.256399969 podStartE2EDuration="2.922889253s" podCreationTimestamp="2025-12-21 20:25:45 +0000 UTC" firstStartedPulling="2025-12-21 20:25:45.769529262 +0000 UTC m=+6.020068404" lastFinishedPulling="2025-12-21 20:25:47.436018547 +0000 UTC m=+7.686557688" observedRunningTime="2025-12-21 20:25:47.922768747 +0000 UTC m=+8.173307897" watchObservedRunningTime="2025-12-21 20:25:47.922889253 +0000 UTC m=+8.173428404"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.191995    1351 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.254058    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqn2\" (UniqueName: \"kubernetes.io/projected/a901db92-ff3c-4b7d-b391-9265924cb998-kube-api-access-bxqn2\") pod \"storage-provisioner\" (UID: \"a901db92-ff3c-4b7d-b391-9265924cb998\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.254120    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5-config-volume\") pod \"coredns-66bc5c9577-lvwlf\" (UID: \"8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5\") " pod="kube-system/coredns-66bc5c9577-lvwlf"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.254199    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxk67\" (UniqueName: \"kubernetes.io/projected/8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5-kube-api-access-gxk67\") pod \"coredns-66bc5c9577-lvwlf\" (UID: \"8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5\") " pod="kube-system/coredns-66bc5c9577-lvwlf"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.254282    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a901db92-ff3c-4b7d-b391-9265924cb998-tmp\") pod \"storage-provisioner\" (UID: \"a901db92-ff3c-4b7d-b391-9265924cb998\") " pod="kube-system/storage-provisioner"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.929512    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.929493382 podStartE2EDuration="12.929493382s" podCreationTimestamp="2025-12-21 20:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:58.929471572 +0000 UTC m=+19.180010722" watchObservedRunningTime="2025-12-21 20:25:58.929493382 +0000 UTC m=+19.180032533"
	Dec 21 20:25:58 embed-certs-413073 kubelet[1351]: I1221 20:25:58.938338    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lvwlf" podStartSLOduration=13.93831892 podStartE2EDuration="13.93831892s" podCreationTimestamp="2025-12-21 20:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:25:58.938121134 +0000 UTC m=+19.188660283" watchObservedRunningTime="2025-12-21 20:25:58.93831892 +0000 UTC m=+19.188858070"
	Dec 21 20:26:00 embed-certs-413073 kubelet[1351]: I1221 20:26:00.870807    1351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfhzs\" (UniqueName: \"kubernetes.io/projected/c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f-kube-api-access-rfhzs\") pod \"busybox\" (UID: \"c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f\") " pod="default/busybox"
	Dec 21 20:26:01 embed-certs-413073 kubelet[1351]: I1221 20:26:01.939599    1351 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.309248483 podStartE2EDuration="1.939577018s" podCreationTimestamp="2025-12-21 20:26:00 +0000 UTC" firstStartedPulling="2025-12-21 20:26:01.079211377 +0000 UTC m=+21.329750522" lastFinishedPulling="2025-12-21 20:26:01.709539913 +0000 UTC m=+21.960079057" observedRunningTime="2025-12-21 20:26:01.93947805 +0000 UTC m=+22.190017199" watchObservedRunningTime="2025-12-21 20:26:01.939577018 +0000 UTC m=+22.190116169"
	Dec 21 20:26:08 embed-certs-413073 kubelet[1351]: E1221 20:26:08.899194    1351 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52950->127.0.0.1:36933: write tcp 127.0.0.1:52950->127.0.0.1:36933: write: broken pipe
	
	
	==> storage-provisioner [f84e44577bd9a23892b7e045cd61e9d8d09e2eee3c35a07c9e99912d15874ed9] <==
	I1221 20:25:58.583627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:25:58.591654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:25:58.591689       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:25:58.593655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:58.597462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:25:58.597607       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:25:58.597767       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce2740a9-39c8-4989-95c5-9081eeb21fd3", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-413073_5920b19c-417a-479b-a836-e6ba335ec4c9 became leader
	I1221 20:25:58.597814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_5920b19c-417a-479b-a836-e6ba335ec4c9!
	W1221 20:25:58.599560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:25:58.603102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:25:58.698838       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_5920b19c-417a-479b-a836-e6ba335ec4c9!
	W1221 20:26:00.607175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:00.612796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:02.616011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:02.619777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:04.623658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:04.632748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:06.635690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:06.640070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:08.644387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:08.650001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:10.653091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:10.657300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-413073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (244.575484ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-766361 describe deploy/metrics-server -n kube-system: exit status 1 (59.826532ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-766361 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-766361
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-766361:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	        "Created": "2025-12-21T20:25:56.399803234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:25:56.435461434Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hosts",
	        "LogPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e-json.log",
	        "Name": "/default-k8s-diff-port-766361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-766361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-766361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	                "LowerDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-766361",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-766361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-766361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fa0eaf8c9fbc926619b1ea3b69a6ec203c7badd7f6330d977f404416977486ec",
	            "SandboxKey": "/var/run/docker/netns/fa0eaf8c9fbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-766361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da966e5bad965057a3f23332d40d7f74bcb84482d07b5154dbfb77c723cfe0cd",
	                    "EndpointID": "4205b2c499eb1fda745816425c8fbb31da674c20284c1fa3e4a0ae64350c11c4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "02:b3:0a:3c:1f:37",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-766361",
	                        "7b1bfe9daca1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25: (1.217108913s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-149976 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ ssh     │ -p bridge-149976 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo containerd config dump                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                               │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:26:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:26:28.281119  349045 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:26:28.281492  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.281537  349045 out.go:374] Setting ErrFile to fd 2...
	I1221 20:26:28.281548  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.282030  349045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:26:28.282872  349045 out.go:368] Setting JSON to false
	I1221 20:26:28.284367  349045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4137,"bootTime":1766344651,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:26:28.284471  349045 start.go:143] virtualization: kvm guest
	I1221 20:26:28.286327  349045 out.go:179] * [embed-certs-413073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:26:28.287913  349045 notify.go:221] Checking for updates...
	I1221 20:26:28.287922  349045 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:26:28.288955  349045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:26:28.290004  349045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:28.291148  349045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:26:28.292120  349045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:26:28.293183  349045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:26:28.294636  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:28.295218  349045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:26:28.318950  349045 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:26:28.319033  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.383757  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.371053987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.383909  349045 docker.go:319] overlay module found
	I1221 20:26:28.386158  349045 out.go:179] * Using the docker driver based on existing profile
	I1221 20:26:28.387267  349045 start.go:309] selected driver: docker
	I1221 20:26:28.387285  349045 start.go:928] validating driver "docker" against &{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.387394  349045 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:26:28.388083  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.452534  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.440765419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.452814  349045 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:28.452841  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:28.452894  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:28.452946  349045 start.go:353] cluster config:
	{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.455294  349045 out.go:179] * Starting "embed-certs-413073" primary control-plane node in "embed-certs-413073" cluster
	I1221 20:26:28.456605  349045 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:26:28.457837  349045 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:26:28.458961  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:28.458999  349045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:26:28.459012  349045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:26:28.459022  349045 cache.go:65] Caching tarball of preloaded images
	I1221 20:26:28.459126  349045 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:26:28.459141  349045 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:26:28.459294  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.483646  349045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:26:28.483671  349045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:26:28.483693  349045 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:26:28.483740  349045 start.go:360] acquireMachinesLock for embed-certs-413073: {Name:mkd7ba395e71c68e48a93bb569cce5d8b29847bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:26:28.483811  349045 start.go:364] duration metric: took 47.571µs to acquireMachinesLock for "embed-certs-413073"
	I1221 20:26:28.483834  349045 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:26:28.483841  349045 fix.go:54] fixHost starting: 
	I1221 20:26:28.484078  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.505358  349045 fix.go:112] recreateIfNeeded on embed-certs-413073: state=Stopped err=<nil>
	W1221 20:26:28.505394  349045 fix.go:138] unexpected machine state, will restart: <nil>
	W1221 20:26:25.351188  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:27.876459  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	I1221 20:26:25.658690  345910 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1221 20:26:25.663742  345910 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1221 20:26:25.665156  345910 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:26:25.665185  345910 api_server.go:131] duration metric: took 1.007516766s to wait for apiserver health ...
	I1221 20:26:25.665204  345910 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:25.669277  345910 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:25.669362  345910 system_pods.go:61] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.669384  345910 system_pods.go:61] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.669405  345910 system_pods.go:61] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.669418  345910 system_pods.go:61] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.669427  345910 system_pods.go:61] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.669436  345910 system_pods.go:61] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.669450  345910 system_pods.go:61] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.669462  345910 system_pods.go:61] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.669470  345910 system_pods.go:74] duration metric: took 4.2593ms to wait for pod list to return data ...
	I1221 20:26:25.669480  345910 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:25.672042  345910 default_sa.go:45] found service account: "default"
	I1221 20:26:25.672063  345910 default_sa.go:55] duration metric: took 2.57644ms for default service account to be created ...
	I1221 20:26:25.672072  345910 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:25.674803  345910 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:25.674837  345910 system_pods.go:89] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.674847  345910 system_pods.go:89] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.674857  345910 system_pods.go:89] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.674870  345910 system_pods.go:89] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.674885  345910 system_pods.go:89] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.674904  345910 system_pods.go:89] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.674916  345910 system_pods.go:89] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.674927  345910 system_pods.go:89] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.674937  345910 system_pods.go:126] duration metric: took 2.858367ms to wait for k8s-apps to be running ...
	I1221 20:26:25.674946  345910 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:25.674994  345910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:25.692567  345910 system_svc.go:56] duration metric: took 17.613432ms WaitForService to wait for kubelet
	I1221 20:26:25.692625  345910 kubeadm.go:587] duration metric: took 2.843019767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:25.692650  345910 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:25.696171  345910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:25.696196  345910 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:25.696214  345910 node_conditions.go:105] duration metric: took 3.549535ms to run NodePressure ...
	I1221 20:26:25.696258  345910 start.go:242] waiting for startup goroutines ...
	I1221 20:26:25.696273  345910 start.go:247] waiting for cluster config update ...
	I1221 20:26:25.696292  345910 start.go:256] writing updated cluster config ...
	I1221 20:26:25.696578  345910 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:25.700995  345910 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:25.705329  345910 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:26:27.711550  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:30.211912  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:27.041802  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	W1221 20:26:29.538369  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	I1221 20:26:31.038840  339032 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:31.038875  339032 node_ready.go:38] duration metric: took 12.50377621s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:26:31.038892  339032 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:31.038958  339032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:31.054404  339032 api_server.go:72] duration metric: took 12.784988284s to wait for apiserver process to appear ...
	I1221 20:26:31.054443  339032 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:31.054466  339032 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:26:31.062787  339032 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:26:31.064052  339032 api_server.go:141] control plane version: v1.34.3
	I1221 20:26:31.064087  339032 api_server.go:131] duration metric: took 9.635216ms to wait for apiserver health ...
	I1221 20:26:31.064097  339032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:31.068373  339032 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:31.068406  339032 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.068414  339032 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.068421  339032 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.068428  339032 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.068433  339032 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.068438  339032 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.068450  339032 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.068459  339032 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.068470  339032 system_pods.go:74] duration metric: took 4.365658ms to wait for pod list to return data ...
	I1221 20:26:31.068481  339032 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:31.071303  339032 default_sa.go:45] found service account: "default"
	I1221 20:26:31.071323  339032 default_sa.go:55] duration metric: took 2.831663ms for default service account to be created ...
	I1221 20:26:31.071332  339032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:31.074677  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.074711  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.074720  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.074727  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.074733  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.074739  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.074745  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.074750  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.074761  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.074793  339032 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1221 20:26:31.357381  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.357419  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.357429  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.357449  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.357455  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.357465  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.357477  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.357487  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.357495  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:26:31.357504  339032 system_pods.go:126] duration metric: took 286.165238ms to wait for k8s-apps to be running ...
	I1221 20:26:31.357517  339032 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:31.357569  339032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:31.374162  339032 system_svc.go:56] duration metric: took 16.636072ms WaitForService to wait for kubelet
	I1221 20:26:31.374199  339032 kubeadm.go:587] duration metric: took 13.104782839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:31.374252  339032 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:31.377689  339032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:31.377718  339032 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:31.377806  339032 node_conditions.go:105] duration metric: took 3.541844ms to run NodePressure ...
	I1221 20:26:31.377821  339032 start.go:242] waiting for startup goroutines ...
	I1221 20:26:31.377832  339032 start.go:247] waiting for cluster config update ...
	I1221 20:26:31.377847  339032 start.go:256] writing updated cluster config ...
	I1221 20:26:31.378180  339032 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:31.382766  339032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:31.386785  339032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:28.506776  349045 out.go:252] * Restarting existing docker container for "embed-certs-413073" ...
	I1221 20:26:28.506853  349045 cli_runner.go:164] Run: docker start embed-certs-413073
	I1221 20:26:28.754220  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.772197  349045 kic.go:430] container "embed-certs-413073" state is running.
	I1221 20:26:28.772613  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:28.791483  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.791662  349045 machine.go:94] provisionDockerMachine start ...
	I1221 20:26:28.791717  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:28.811016  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:28.811307  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:28.811325  349045 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:26:28.811830  349045 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33124: read: connection reset by peer
	I1221 20:26:31.973490  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:31.973519  349045 ubuntu.go:182] provisioning hostname "embed-certs-413073"
	I1221 20:26:31.973592  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:31.998312  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:31.998627  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:31.998655  349045 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-413073 && echo "embed-certs-413073" | sudo tee /etc/hostname
	I1221 20:26:32.169162  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:32.169295  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.197522  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.197833  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.197860  349045 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-413073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-413073/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-413073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:26:32.356078  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:26:32.356105  349045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:26:32.356129  349045 ubuntu.go:190] setting up certificates
	I1221 20:26:32.356139  349045 provision.go:84] configureAuth start
	I1221 20:26:32.356205  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:32.380995  349045 provision.go:143] copyHostCerts
	I1221 20:26:32.381067  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:26:32.381088  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:26:32.381158  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:26:32.381336  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:26:32.381352  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:26:32.381399  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:26:32.381517  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:26:32.381528  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:26:32.381563  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:26:32.381652  349045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.embed-certs-413073 san=[127.0.0.1 192.168.94.2 embed-certs-413073 localhost minikube]
	I1221 20:26:32.479184  349045 provision.go:177] copyRemoteCerts
	I1221 20:26:32.479284  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:26:32.479340  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.505304  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:32.615477  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:26:32.637386  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1221 20:26:32.657941  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:26:32.679246  349045 provision.go:87] duration metric: took 323.089087ms to configureAuth
	I1221 20:26:32.679276  349045 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:26:32.679495  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:32.679620  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.704097  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.704422  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.704452  349045 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:26:32.393030  339032 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:26:32.393060  339032 pod_ready.go:86] duration metric: took 1.006253441s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.395886  339032 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.400375  339032 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.400399  339032 pod_ready.go:86] duration metric: took 4.491012ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.403288  339032 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.408032  339032 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.408055  339032 pod_ready.go:86] duration metric: took 4.736601ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.410124  339032 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.590191  339032 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.590243  339032 pod_ready.go:86] duration metric: took 180.076227ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.790437  339032 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.190998  339032 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:26:33.191030  339032 pod_ready.go:86] duration metric: took 400.559576ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.390945  339032 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790606  339032 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:33.790642  339032 pod_ready.go:86] duration metric: took 399.665202ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790658  339032 pod_ready.go:40] duration metric: took 2.40784924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:33.840839  339032 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:26:33.865033  339032 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
	W1221 20:26:30.348013  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.348358  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.212243  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:34.711111  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:26:34.122259  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:26:34.122300  349045 machine.go:97] duration metric: took 5.330623534s to provisionDockerMachine
	I1221 20:26:34.122318  349045 start.go:293] postStartSetup for "embed-certs-413073" (driver="docker")
	I1221 20:26:34.122332  349045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:26:34.122408  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:26:34.122462  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.145201  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.245112  349045 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:26:34.248686  349045 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:26:34.248719  349045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:26:34.248731  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:26:34.248796  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:26:34.248891  349045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:26:34.248979  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:26:34.257867  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:34.276308  349045 start.go:296] duration metric: took 153.975025ms for postStartSetup
	I1221 20:26:34.276373  349045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:26:34.276431  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.295030  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.390399  349045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:26:34.395009  349045 fix.go:56] duration metric: took 5.911162905s for fixHost
	I1221 20:26:34.395034  349045 start.go:83] releasing machines lock for "embed-certs-413073", held for 5.911210955s
	I1221 20:26:34.395103  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:34.415658  349045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:26:34.415718  349045 ssh_runner.go:195] Run: cat /version.json
	I1221 20:26:34.415753  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.415772  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.437353  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.438191  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.532205  349045 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:34.590519  349045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:26:34.626292  349045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:26:34.631256  349045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:26:34.631323  349045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:26:34.640212  349045 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:26:34.640261  349045 start.go:496] detecting cgroup driver to use...
	I1221 20:26:34.640296  349045 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:26:34.640339  349045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:26:34.655152  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:26:34.666935  349045 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:26:34.666995  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:26:34.681162  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:26:34.694205  349045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:26:34.773836  349045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:26:34.866636  349045 docker.go:234] disabling docker service ...
	I1221 20:26:34.866704  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:26:34.883877  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:26:34.897764  349045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:26:34.992795  349045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:26:35.089519  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:26:35.101885  349045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:26:35.117012  349045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:26:35.117071  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.125693  349045 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:26:35.125742  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.135514  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.144405  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.153105  349045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:26:35.161280  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.170948  349045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.181393  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.190217  349045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:26:35.197559  349045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:26:35.204474  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.282887  349045 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:26:35.491553  349045 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:26:35.491642  349045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:26:35.496405  349045 start.go:564] Will wait 60s for crictl version
	I1221 20:26:35.496470  349045 ssh_runner.go:195] Run: which crictl
	I1221 20:26:35.500988  349045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:26:35.525158  349045 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:26:35.525280  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.553291  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.582458  349045 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:26:35.583603  349045 cli_runner.go:164] Run: docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:26:35.601409  349045 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1221 20:26:35.605668  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.616349  349045 kubeadm.go:884] updating cluster {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:26:35.616474  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:35.616526  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.647904  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.647924  349045 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:26:35.647969  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.672746  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.672771  349045 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:26:35.672778  349045 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1221 20:26:35.672870  349045 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-413073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:26:35.672941  349045 ssh_runner.go:195] Run: crio config
	I1221 20:26:35.721006  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:35.721028  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:35.721041  349045 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:26:35.721060  349045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-413073 NodeName:embed-certs-413073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:26:35.721172  349045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-413073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:26:35.721262  349045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:26:35.729396  349045 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:26:35.729469  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:26:35.737076  349045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1221 20:26:35.749458  349045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:26:35.762014  349045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1221 20:26:35.776095  349045 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:26:35.779907  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.790127  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.871764  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:35.894457  349045 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073 for IP: 192.168.94.2
	I1221 20:26:35.894479  349045 certs.go:195] generating shared ca certs ...
	I1221 20:26:35.894498  349045 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:35.894692  349045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:26:35.894757  349045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:26:35.894773  349045 certs.go:257] generating profile certs ...
	I1221 20:26:35.894903  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key
	I1221 20:26:35.894982  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206
	I1221 20:26:35.895039  349045 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key
	I1221 20:26:35.895195  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:26:35.895255  349045 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:26:35.895269  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:26:35.895316  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:26:35.895359  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:26:35.895394  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:26:35.895460  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:35.896857  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:26:35.918148  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:26:35.937363  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:26:35.956791  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:26:35.980319  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1221 20:26:35.998307  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:26:36.016864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:26:36.035412  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:26:36.052147  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:26:36.068514  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:26:36.085864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:26:36.104067  349045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:26:36.116311  349045 ssh_runner.go:195] Run: openssl version
	I1221 20:26:36.122281  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.129549  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:26:36.137800  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141357  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141422  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.177095  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:36.184709  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.191985  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:26:36.199039  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202890  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202936  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.237305  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:26:36.244698  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.251690  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:26:36.258834  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262601  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262651  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.297116  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:26:36.304458  349045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:26:36.308039  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:26:36.343506  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:26:36.379330  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:26:36.419930  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:26:36.467926  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:26:36.517528  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
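	
	The six openssl runs above are expiry checks: "openssl x509 -noout -in <cert> -checkend 86400" exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how the run decides whether the existing control-plane certs can be reused. Below is a minimal standalone Go sketch of the same 24-hour expiry test; it is illustrative only (file path and helper name are placeholders, not minikube's own code).
	
	// certcheck.go: illustrative equivalent of "openssl x509 -checkend 86400".
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given window (86400s == 24h in the log above).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		// Placeholder path; the run above checks several certs under /var/lib/minikube/certs/.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
	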
	I1221 20:26:36.569807  349045 kubeadm.go:401] StartCluster: {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:36.569934  349045 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:26:36.570012  349045 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:26:36.603596  349045 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:26:36.603620  349045 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:26:36.603626  349045 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:26:36.603631  349045 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:26:36.603635  349045 cri.go:96] found id: ""
	I1221 20:26:36.603694  349045 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:26:36.615256  349045 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:36Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:26:36.615332  349045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:26:36.623063  349045 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:26:36.623081  349045 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:26:36.623168  349045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:26:36.630509  349045 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:26:36.631520  349045 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-413073" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.632152  349045 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-413073" cluster setting kubeconfig missing "embed-certs-413073" context setting]
	I1221 20:26:36.633238  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.635239  349045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:26:36.642696  349045 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1221 20:26:36.642724  349045 kubeadm.go:602] duration metric: took 19.637661ms to restartPrimaryControlPlane
	I1221 20:26:36.642733  349045 kubeadm.go:403] duration metric: took 72.941162ms to StartCluster
	I1221 20:26:36.642749  349045 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.642804  349045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.644942  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.645178  349045 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:26:36.645266  349045 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:26:36.645373  349045 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-413073"
	I1221 20:26:36.645392  349045 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-413073"
	W1221 20:26:36.645404  349045 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:26:36.645407  349045 addons.go:70] Setting dashboard=true in profile "embed-certs-413073"
	I1221 20:26:36.645432  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645434  349045 addons.go:239] Setting addon dashboard=true in "embed-certs-413073"
	I1221 20:26:36.645440  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:36.645468  349045 addons.go:70] Setting default-storageclass=true in profile "embed-certs-413073"
	W1221 20:26:36.645444  349045 addons.go:248] addon dashboard should already be in state true
	I1221 20:26:36.645494  349045 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-413073"
	I1221 20:26:36.645510  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645796  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645906  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645963  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.647026  349045 out.go:179] * Verifying Kubernetes components...
	I1221 20:26:36.648142  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:36.670830  349045 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:26:36.671909  349045 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:26:36.671982  349045 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:26:36.672921  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:26:36.672938  349045 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:26:36.672981  349045 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.672995  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.672999  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:26:36.673047  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.673322  349045 addons.go:239] Setting addon default-storageclass=true in "embed-certs-413073"
	W1221 20:26:36.673343  349045 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:26:36.673379  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.673831  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.713604  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.716336  349045 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.716359  349045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:26:36.716417  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.717556  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.740725  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.799504  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:36.813392  349045 node_ready.go:35] waiting up to 6m0s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:36.827736  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.831307  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:26:36.831331  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:26:36.847340  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:26:36.847361  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:26:36.857774  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.864116  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:26:36.864135  349045 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:26:36.880513  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:26:36.880541  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:26:36.895508  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:26:36.895533  349045 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:26:36.909454  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:26:36.909478  349045 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:26:36.923439  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:26:36.923466  349045 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:26:36.936237  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:26:36.936258  349045 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:26:36.948470  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:36.948487  349045 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:26:36.960580  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:38.055702  349045 node_ready.go:49] node "embed-certs-413073" is "Ready"
	I1221 20:26:38.055739  349045 node_ready.go:38] duration metric: took 1.242302482s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:38.055756  349045 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:38.055807  349045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:38.565557  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70771546s)
	I1221 20:26:38.566433  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.605814489s)
	I1221 20:26:38.566655  349045 api_server.go:72] duration metric: took 1.921448818s to wait for apiserver process to appear ...
	I1221 20:26:38.566678  349045 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:38.566680  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.738896864s)
	I1221 20:26:38.566700  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:38.571884  349045 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-413073 addons enable metrics-server
	
	I1221 20:26:38.572921  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:26:38.573011  349045 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
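	
	The 500 above is expected shortly after a control-plane restart: /healthz aggregates the individual [+]/[-] post-start-hook checks, and the run simply keeps polling until the remaining hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok. A minimal Go sketch of that kind of poll against the endpoint shown in the log (illustrative only, not minikube's api_server.go; certificate verification is skipped purely because the test cluster uses a self-signed CA):
	
	// healthzpoll.go: illustrative poller for the apiserver /healthz endpoint.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Skip TLS verification only because the cluster CA is self-signed.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// A 500 carries the per-check [+]/[-] report seen in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	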
	I1221 20:26:38.580646  349045 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1221 20:26:34.847391  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.849413  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:39.348748  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.714889  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:39.210974  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 20:26:31 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:31.116544985Z" level=info msg="Starting container: 17135be1d4c25fe2f970da0c6990529266f4654cfb02c996685558361b64b1ce" id=17cbed31-65ce-4343-a8b0-d482c26183d3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:31 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:31.119536505Z" level=info msg="Started container" PID=1914 containerID=17135be1d4c25fe2f970da0c6990529266f4654cfb02c996685558361b64b1ce description=kube-system/coredns-66bc5c9577-bp67f/coredns id=17cbed31-65ce-4343-a8b0-d482c26183d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a284eaf9f88b9f22c411ced24f6c31f78ee208d94f3500b7e2b515d8fb30b02
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.389308947Z" level=info msg="Running pod sandbox: default/busybox/POD" id=04f79f79-1c6c-4a20-a1a7-a96748068498 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.389389497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.394217762Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1118b67e2fd5195fa305d5d68f6fff7f3f8948f1054b47ba98e39093b52b1c7f UID:ea115a67-2180-409c-8faf-3057c284c92d NetNS:/var/run/netns/fd45c0df-927a-4fba-9aa0-1f87e1aa709f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b564c0}] Aliases:map[]}"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.394266027Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.403373691Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1118b67e2fd5195fa305d5d68f6fff7f3f8948f1054b47ba98e39093b52b1c7f UID:ea115a67-2180-409c-8faf-3057c284c92d NetNS:/var/run/netns/fd45c0df-927a-4fba-9aa0-1f87e1aa709f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b564c0}] Aliases:map[]}"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.403528717Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.404267132Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.404955872Z" level=info msg="Ran pod sandbox 1118b67e2fd5195fa305d5d68f6fff7f3f8948f1054b47ba98e39093b52b1c7f with infra container: default/busybox/POD" id=04f79f79-1c6c-4a20-a1a7-a96748068498 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.406076349Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f2c1507-f64f-4762-89ed-cf04481cf54b name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.406196511Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f2c1507-f64f-4762-89ed-cf04481cf54b name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.406267307Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f2c1507-f64f-4762-89ed-cf04481cf54b name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.406807761Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4614ac9-db84-40eb-bb58-631d0658760d name=/runtime.v1.ImageService/PullImage
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.409119548Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.985566427Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c4614ac9-db84-40eb-bb58-631d0658760d name=/runtime.v1.ImageService/PullImage
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.986153552Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7ed610b9-8788-4235-9118-47705e782d1a name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.987479772Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=96eeea8a-3e09-4363-9655-f8985db13153 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.990725984Z" level=info msg="Creating container: default/busybox/busybox" id=142d503b-1b65-41e3-a227-a51b961e0b16 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.990854153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.994496876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:34 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:34.994882785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:35 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:35.024348411Z" level=info msg="Created container b8feb50f9cf511aa9b62f245f9d3beb42f82f1fc31f45839018cc9b1b83f8f74: default/busybox/busybox" id=142d503b-1b65-41e3-a227-a51b961e0b16 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:35 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:35.031171428Z" level=info msg="Starting container: b8feb50f9cf511aa9b62f245f9d3beb42f82f1fc31f45839018cc9b1b83f8f74" id=80019460-fb9e-4b5c-8091-b081d2f92d48 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:35 default-k8s-diff-port-766361 crio[781]: time="2025-12-21T20:26:35.034994381Z" level=info msg="Started container" PID=1991 containerID=b8feb50f9cf511aa9b62f245f9d3beb42f82f1fc31f45839018cc9b1b83f8f74 description=default/busybox/busybox id=80019460-fb9e-4b5c-8091-b081d2f92d48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1118b67e2fd5195fa305d5d68f6fff7f3f8948f1054b47ba98e39093b52b1c7f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b8feb50f9cf51       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   1118b67e2fd51       busybox                                                default
	17135be1d4c25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   4a284eaf9f88b       coredns-66bc5c9577-bp67f                               kube-system
	965ff4d48c6f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d99eea5c439c5       storage-provisioner                                    kube-system
	e375ad2180e83       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   021cbeacbef79       kindnet-td7vw                                          kube-system
	2dcbd67eb1045       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      25 seconds ago      Running             kube-proxy                0                   87376a0100dd0       kube-proxy-w9lgb                                       kube-system
	70ee7f282aca8       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      36 seconds ago      Running             kube-scheduler            0                   275aee132c81b       kube-scheduler-default-k8s-diff-port-766361            kube-system
	5b1e36816261e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      36 seconds ago      Running             kube-apiserver            0                   35d2cafafcaf9       kube-apiserver-default-k8s-diff-port-766361            kube-system
	82cae945290f1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      36 seconds ago      Running             kube-controller-manager   0                   c31a3d314dcce       kube-controller-manager-default-k8s-diff-port-766361   kube-system
	fc336254840ab       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   38792cfe02875       etcd-default-k8s-diff-port-766361                      kube-system
	
	
	==> coredns [17135be1d4c25fe2f970da0c6990529266f4654cfb02c996685558361b64b1ce] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55789 - 53658 "HINFO IN 1616265699179993718.7192845169067255442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.516154052s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-766361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-766361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-766361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_26_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-766361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:26:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:26:42 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:26:42 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:26:42 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:26:42 +0000   Sun, 21 Dec 2025 20:26:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-766361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                fa186ebe-d952-42e9-84eb-564f086c9a9b
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bp67f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-766361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-td7vw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-766361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-766361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-w9lgb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-766361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 37s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-766361 event: Registered Node default-k8s-diff-port-766361 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-766361 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [fc336254840abe3e8a78f688e1be40cb4f9dbc686c63f62bfc7b41ec4265b896] <==
	{"level":"warn","ts":"2025-12-21T20:26:08.966333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:08.994821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.013254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.025834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.035686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.048604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.060660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.072759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.084502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.105029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.114479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.138387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.153371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.167472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.179449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.196217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.205468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.223911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.228468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.247473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.259035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.287560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.297316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.307520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:09.384838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:26:43 up  1:09,  0 user,  load average: 4.55, 3.90, 2.75
	Linux default-k8s-diff-port-766361 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e375ad2180e83d7a56bf2040e09ad782d064fc2047264c9e3a2caf566c3914a1] <==
	I1221 20:26:20.413989       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:20.414298       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1221 20:26:20.414458       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:20.414484       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:20.414515       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:20.616771       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:20.616823       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:20.616839       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:20.617529       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:21.218363       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:21.218387       1 metrics.go:72] Registering metrics
	I1221 20:26:21.218429       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:30.618080       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:26:30.618135       1 main.go:301] handling current node
	I1221 20:26:40.621173       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:26:40.621205       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b1e36816261e5f53cf4537d631caf99a1a387c656fd8861976a183e09ae5e10] <==
	I1221 20:26:09.980155       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1221 20:26:09.982930       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1221 20:26:09.987639       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:26:09.992208       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1221 20:26:10.023925       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1221 20:26:10.066790       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:26:10.163162       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:26:10.868588       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 20:26:10.876118       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:26:10.876143       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:26:11.353857       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:11.401399       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:11.473027       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:26:11.479663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1221 20:26:11.480905       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:26:11.485398       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:11.882866       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:26:12.257746       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:26:12.265476       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:26:12.274297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:26:17.635872       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:26:17.639519       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:26:17.784399       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:26:17.983316       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1221 20:26:42.174689       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:56120: use of closed network connection
	
	
	==> kube-controller-manager [82cae945290f1942ec1017b4c2be41f5d3e3b4dce5bd0972fa63703f7c023e36] <==
	I1221 20:26:16.840654       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-766361" podCIDRs=["10.244.0.0/24"]
	I1221 20:26:16.855724       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:26:16.880949       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 20:26:16.880978       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 20:26:16.881014       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1221 20:26:16.881164       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 20:26:16.881276       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-766361"
	I1221 20:26:16.881328       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1221 20:26:16.882200       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 20:26:16.883356       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1221 20:26:16.883441       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1221 20:26:16.883567       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1221 20:26:16.883475       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 20:26:16.883461       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 20:26:16.883675       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1221 20:26:16.883691       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:26:16.884423       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:26:16.884532       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 20:26:16.884657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:26:16.885533       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1221 20:26:16.886657       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1221 20:26:16.886684       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1221 20:26:16.896272       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:26:16.905598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:26:31.882840       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2dcbd67eb1045903a25769831ace1633831eb88ae3a717d33766f0104e574bb3] <==
	I1221 20:26:18.501848       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:26:18.572674       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:26:18.673906       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:26:18.673951       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1221 20:26:18.674035       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:26:18.696714       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:18.696778       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:26:18.702661       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:26:18.703086       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:26:18.703122       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:18.704602       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:26:18.704829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:26:18.704659       1 config.go:200] "Starting service config controller"
	I1221 20:26:18.704865       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:26:18.704724       1 config.go:309] "Starting node config controller"
	I1221 20:26:18.704882       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:26:18.704888       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:26:18.704763       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:26:18.704896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:26:18.805013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:26:18.805036       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:26:18.805024       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [70ee7f282aca8182c8abcb6438253fce78a16b0232cd7e562644f05f14906993] <==
	E1221 20:26:09.929798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1221 20:26:09.929920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1221 20:26:09.930010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:26:09.930112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1221 20:26:09.930530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 20:26:09.930884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1221 20:26:09.930725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:26:09.930820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1221 20:26:09.930818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1221 20:26:09.930913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:26:09.931043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1221 20:26:09.931263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1221 20:26:09.931374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:26:09.931611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:26:10.759551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1221 20:26:10.870459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1221 20:26:10.907983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1221 20:26:10.953199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1221 20:26:10.965421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1221 20:26:10.965635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1221 20:26:10.978189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1221 20:26:11.026404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1221 20:26:11.085472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1221 20:26:11.325581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1221 20:26:13.426251       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:26:13 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:13.139469    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-766361" podStartSLOduration=2.139426992 podStartE2EDuration="2.139426992s" podCreationTimestamp="2025-12-21 20:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:13.139399167 +0000 UTC m=+1.122874105" watchObservedRunningTime="2025-12-21 20:26:13.139426992 +0000 UTC m=+1.122901927"
	Dec 21 20:26:13 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:13.151374    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-766361" podStartSLOduration=1.151301312 podStartE2EDuration="1.151301312s" podCreationTimestamp="2025-12-21 20:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:13.151208937 +0000 UTC m=+1.134683875" watchObservedRunningTime="2025-12-21 20:26:13.151301312 +0000 UTC m=+1.134776250"
	Dec 21 20:26:13 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:13.174330    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-766361" podStartSLOduration=1.174303606 podStartE2EDuration="1.174303606s" podCreationTimestamp="2025-12-21 20:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:13.160316029 +0000 UTC m=+1.143790994" watchObservedRunningTime="2025-12-21 20:26:13.174303606 +0000 UTC m=+1.157778546"
	Dec 21 20:26:13 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:13.192815    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-766361" podStartSLOduration=1.192790631 podStartE2EDuration="1.192790631s" podCreationTimestamp="2025-12-21 20:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:13.174834899 +0000 UTC m=+1.158309842" watchObservedRunningTime="2025-12-21 20:26:13.192790631 +0000 UTC m=+1.176265569"
	Dec 21 20:26:16 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:16.916767    1305 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 21 20:26:16 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:16.918520    1305 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020817    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0917f5ab-1135-421c-b15c-096a64269fab-kube-proxy\") pod \"kube-proxy-w9lgb\" (UID: \"0917f5ab-1135-421c-b15c-096a64269fab\") " pod="kube-system/kube-proxy-w9lgb"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020849    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0917f5ab-1135-421c-b15c-096a64269fab-xtables-lock\") pod \"kube-proxy-w9lgb\" (UID: \"0917f5ab-1135-421c-b15c-096a64269fab\") " pod="kube-system/kube-proxy-w9lgb"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020863    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0917f5ab-1135-421c-b15c-096a64269fab-lib-modules\") pod \"kube-proxy-w9lgb\" (UID: \"0917f5ab-1135-421c-b15c-096a64269fab\") " pod="kube-system/kube-proxy-w9lgb"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020876    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-cni-cfg\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020892    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-lib-modules\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020917    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfks\" (UniqueName: \"kubernetes.io/projected/0917f5ab-1135-421c-b15c-096a64269fab-kube-api-access-ltfks\") pod \"kube-proxy-w9lgb\" (UID: \"0917f5ab-1135-421c-b15c-096a64269fab\") " pod="kube-system/kube-proxy-w9lgb"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.020978    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-xtables-lock\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:26:18 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:18.021015    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b57p\" (UniqueName: \"kubernetes.io/projected/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-kube-api-access-2b57p\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:26:19 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:19.165148    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9lgb" podStartSLOduration=2.165124824 podStartE2EDuration="2.165124824s" podCreationTimestamp="2025-12-21 20:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:19.143603405 +0000 UTC m=+7.127078350" watchObservedRunningTime="2025-12-21 20:26:19.165124824 +0000 UTC m=+7.148599764"
	Dec 21 20:26:21 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:21.143185    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-td7vw" podStartSLOduration=2.257037602 podStartE2EDuration="4.143163381s" podCreationTimestamp="2025-12-21 20:26:17 +0000 UTC" firstStartedPulling="2025-12-21 20:26:18.320911496 +0000 UTC m=+6.304386433" lastFinishedPulling="2025-12-21 20:26:20.207037283 +0000 UTC m=+8.190512212" observedRunningTime="2025-12-21 20:26:21.143044925 +0000 UTC m=+9.126519862" watchObservedRunningTime="2025-12-21 20:26:21.143163381 +0000 UTC m=+9.126638318"
	Dec 21 20:26:30 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:30.717919    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 21 20:26:30 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:30.816532    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5xd\" (UniqueName: \"kubernetes.io/projected/17b70c90-6d4f-48e6-9fa7-a491c9720564-kube-api-access-hj5xd\") pod \"coredns-66bc5c9577-bp67f\" (UID: \"17b70c90-6d4f-48e6-9fa7-a491c9720564\") " pod="kube-system/coredns-66bc5c9577-bp67f"
	Dec 21 20:26:30 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:30.816585    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/852bdfc6-9902-475e-90d4-df19a02320fc-tmp\") pod \"storage-provisioner\" (UID: \"852bdfc6-9902-475e-90d4-df19a02320fc\") " pod="kube-system/storage-provisioner"
	Dec 21 20:26:30 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:30.816605    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b70c90-6d4f-48e6-9fa7-a491c9720564-config-volume\") pod \"coredns-66bc5c9577-bp67f\" (UID: \"17b70c90-6d4f-48e6-9fa7-a491c9720564\") " pod="kube-system/coredns-66bc5c9577-bp67f"
	Dec 21 20:26:30 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:30.816682    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrltp\" (UniqueName: \"kubernetes.io/projected/852bdfc6-9902-475e-90d4-df19a02320fc-kube-api-access-hrltp\") pod \"storage-provisioner\" (UID: \"852bdfc6-9902-475e-90d4-df19a02320fc\") " pod="kube-system/storage-provisioner"
	Dec 21 20:26:31 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:31.191098    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bp67f" podStartSLOduration=13.191073641 podStartE2EDuration="13.191073641s" podCreationTimestamp="2025-12-21 20:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:31.173870443 +0000 UTC m=+19.157345381" watchObservedRunningTime="2025-12-21 20:26:31.191073641 +0000 UTC m=+19.174548583"
	Dec 21 20:26:32 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:32.182822    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.182798534 podStartE2EDuration="14.182798534s" podCreationTimestamp="2025-12-21 20:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:26:31.190969413 +0000 UTC m=+19.174444352" watchObservedRunningTime="2025-12-21 20:26:32.182798534 +0000 UTC m=+20.166273473"
	Dec 21 20:26:34 default-k8s-diff-port-766361 kubelet[1305]: I1221 20:26:34.139160    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfznt\" (UniqueName: \"kubernetes.io/projected/ea115a67-2180-409c-8faf-3057c284c92d-kube-api-access-kfznt\") pod \"busybox\" (UID: \"ea115a67-2180-409c-8faf-3057c284c92d\") " pod="default/busybox"
	Dec 21 20:26:42 default-k8s-diff-port-766361 kubelet[1305]: E1221 20:26:42.174611    1305 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46046->127.0.0.1:41811: write tcp 127.0.0.1:46046->127.0.0.1:41811: write: broken pipe
	
	
	==> storage-provisioner [965ff4d48c6f29d5cd77387d0affa07990f245c2916e4476ae4cb80a7726593b] <==
	I1221 20:26:31.129744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:26:31.138455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:26:31.138509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:26:31.141121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:31.147617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:26:31.147820       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:26:31.148163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766361_90b25a45-b2c6-441d-aa0d-7e5c5a837e59!
	I1221 20:26:31.148010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"577e970a-eb7c-428e-948b-c188b50d25b7", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-766361_90b25a45-b2c6-441d-aa0d-7e5c5a837e59 became leader
	W1221 20:26:31.151993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:31.157293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:26:31.248871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766361_90b25a45-b2c6-441d-aa0d-7e5c5a837e59!
	W1221 20:26:33.160926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:33.172417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:35.175955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:35.181207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:37.184783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:37.189532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:39.192923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:39.196980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:41.199728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:41.203580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:43.208019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:43.213542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.42s)
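Note on the captured logs above: the storage-provisioner block is dominated by repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings because its leader election keeps renewing the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event in that block). These warnings are informational and not necessarily the cause of the failure above. A hedged triage sketch, reusing the kubectl context the harness already uses, for looking at the legacy lock object and at the EndpointSlice resources the warning points to:

	# inspect the legacy Endpoints lock the provisioner keeps updating
	kubectl --context default-k8s-diff-port-766361 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# list the replacement EndpointSlice resources the warning recommends
	kubectl --context default-k8s-diff-port-766361 -n kube-system get endpointslices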

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-699289 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-699289 --alsologtostderr -v=1: exit status 80 (2.382478536s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-699289 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:26:55.082020  353337 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:26:55.082333  353337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:55.082344  353337 out.go:374] Setting ErrFile to fd 2...
	I1221 20:26:55.082348  353337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:55.082548  353337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:26:55.082781  353337 out.go:368] Setting JSON to false
	I1221 20:26:55.082801  353337 mustload.go:66] Loading cluster: old-k8s-version-699289
	I1221 20:26:55.083139  353337 config.go:182] Loaded profile config "old-k8s-version-699289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1221 20:26:55.083552  353337 cli_runner.go:164] Run: docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	I1221 20:26:55.101193  353337 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:26:55.101463  353337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:55.161219  353337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-21 20:26:55.150305837 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:55.161819  353337 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-699289 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:26:55.163257  353337 out.go:179] * Pausing node old-k8s-version-699289 ... 
	I1221 20:26:55.164749  353337 host.go:66] Checking if "old-k8s-version-699289" exists ...
	I1221 20:26:55.165010  353337 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:55.165045  353337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-699289
	I1221 20:26:55.186608  353337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/old-k8s-version-699289/id_rsa Username:docker}
	I1221 20:26:55.283657  353337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:55.296001  353337 pause.go:52] kubelet running: true
	I1221 20:26:55.296073  353337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:26:55.460844  353337 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:26:55.460918  353337 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:26:55.525528  353337 cri.go:96] found id: "fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1"
	I1221 20:26:55.525548  353337 cri.go:96] found id: "1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b"
	I1221 20:26:55.525552  353337 cri.go:96] found id: "d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	I1221 20:26:55.525556  353337 cri.go:96] found id: "a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3"
	I1221 20:26:55.525559  353337 cri.go:96] found id: "33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea"
	I1221 20:26:55.525563  353337 cri.go:96] found id: "d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5"
	I1221 20:26:55.525565  353337 cri.go:96] found id: "f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4"
	I1221 20:26:55.525567  353337 cri.go:96] found id: "5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559"
	I1221 20:26:55.525570  353337 cri.go:96] found id: "64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d"
	I1221 20:26:55.525579  353337 cri.go:96] found id: "65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	I1221 20:26:55.525584  353337 cri.go:96] found id: "26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6"
	I1221 20:26:55.525588  353337 cri.go:96] found id: ""
	I1221 20:26:55.525635  353337 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:26:55.537327  353337 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:55Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:26:55.695700  353337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:55.708835  353337 pause.go:52] kubelet running: false
	I1221 20:26:55.708883  353337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:26:55.846041  353337 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:26:55.846160  353337 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:26:55.909677  353337 cri.go:96] found id: "fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1"
	I1221 20:26:55.909698  353337 cri.go:96] found id: "1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b"
	I1221 20:26:55.909702  353337 cri.go:96] found id: "d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	I1221 20:26:55.909704  353337 cri.go:96] found id: "a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3"
	I1221 20:26:55.909707  353337 cri.go:96] found id: "33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea"
	I1221 20:26:55.909711  353337 cri.go:96] found id: "d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5"
	I1221 20:26:55.909713  353337 cri.go:96] found id: "f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4"
	I1221 20:26:55.909716  353337 cri.go:96] found id: "5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559"
	I1221 20:26:55.909719  353337 cri.go:96] found id: "64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d"
	I1221 20:26:55.909729  353337 cri.go:96] found id: "65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	I1221 20:26:55.909732  353337 cri.go:96] found id: "26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6"
	I1221 20:26:55.909734  353337 cri.go:96] found id: ""
	I1221 20:26:55.909769  353337 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:26:56.185620  353337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:56.198326  353337 pause.go:52] kubelet running: false
	I1221 20:26:56.198374  353337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:26:56.341623  353337 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:26:56.341694  353337 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:26:56.406505  353337 cri.go:96] found id: "fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1"
	I1221 20:26:56.406526  353337 cri.go:96] found id: "1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b"
	I1221 20:26:56.406541  353337 cri.go:96] found id: "d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	I1221 20:26:56.406546  353337 cri.go:96] found id: "a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3"
	I1221 20:26:56.406551  353337 cri.go:96] found id: "33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea"
	I1221 20:26:56.406557  353337 cri.go:96] found id: "d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5"
	I1221 20:26:56.406561  353337 cri.go:96] found id: "f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4"
	I1221 20:26:56.406566  353337 cri.go:96] found id: "5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559"
	I1221 20:26:56.406571  353337 cri.go:96] found id: "64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d"
	I1221 20:26:56.406579  353337 cri.go:96] found id: "65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	I1221 20:26:56.406587  353337 cri.go:96] found id: "26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6"
	I1221 20:26:56.406591  353337 cri.go:96] found id: ""
	I1221 20:26:56.406643  353337 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:26:57.166439  353337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:57.179206  353337 pause.go:52] kubelet running: false
	I1221 20:26:57.179275  353337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:26:57.317739  353337 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:26:57.317829  353337 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:26:57.383477  353337 cri.go:96] found id: "fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1"
	I1221 20:26:57.383501  353337 cri.go:96] found id: "1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b"
	I1221 20:26:57.383507  353337 cri.go:96] found id: "d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	I1221 20:26:57.383512  353337 cri.go:96] found id: "a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3"
	I1221 20:26:57.383517  353337 cri.go:96] found id: "33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea"
	I1221 20:26:57.383521  353337 cri.go:96] found id: "d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5"
	I1221 20:26:57.383525  353337 cri.go:96] found id: "f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4"
	I1221 20:26:57.383529  353337 cri.go:96] found id: "5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559"
	I1221 20:26:57.383533  353337 cri.go:96] found id: "64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d"
	I1221 20:26:57.383541  353337 cri.go:96] found id: "65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	I1221 20:26:57.383545  353337 cri.go:96] found id: "26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6"
	I1221 20:26:57.383549  353337 cri.go:96] found id: ""
	I1221 20:26:57.383598  353337 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:26:57.397377  353337 out.go:203] 
	W1221 20:26:57.398558  353337 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:26:57.398574  353337 out.go:285] * 
	* 
	W1221 20:26:57.402547  353337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:26:57.404458  353337 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-699289 --alsologtostderr -v=1 failed: exit status 80
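The stderr above pins down why the pause aborted: after the kubelet is disabled, minikube enumerates running containers with `sudo runc list -f json`, and every attempt (including the retry) fails with "open /run/runc: no such file or directory", which surfaces as GUEST_PAUSE and exit status 80. A hedged sketch for checking the same things by hand on this node (the profile name and the crictl invocation are taken from the log above; the rest is standard minikube and shell usage):

	# Is the runc state directory present on the node?
	out/minikube-linux-amd64 -p old-k8s-version-699289 ssh -- 'ls -ld /run/runc || echo "/run/runc is missing"'
	# The CRI still sees the containers, which is why the crictl listing above kept returning IDs:
	out/minikube-linux-amd64 -p old-k8s-version-699289 ssh -- 'sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	# Collect the full logs the failure box asks for when filing an issue:
	out/minikube-linux-amd64 -p old-k8s-version-699289 logs --file=logs.txt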
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-699289
helpers_test.go:244: (dbg) docker inspect old-k8s-version-699289:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	        "Created": "2025-12-21T20:24:47.982475594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:25:59.574578126Z",
	            "FinishedAt": "2025-12-21T20:25:58.720206224Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hosts",
	        "LogPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3-json.log",
	        "Name": "/old-k8s-version-699289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-699289:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-699289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	                "LowerDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-699289",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-699289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-699289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "44f5a444d357151aade215d64d8fbf08a5f09ecad4d17a4d6f7120f032080072",
	            "SandboxKey": "/var/run/docker/netns/44f5a444d357",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-699289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99f5907a172c3f93121569e27574257a2eb119dd81f153d568f418838cd89542",
	                    "EndpointID": "e97be30236fbfb58a7f12abff0422eafca376eb3de07261f9bedf4965af472e0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:ec:3c:0c:72:9a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-699289",
	                        "e26e2b356a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
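For reference, the 22/tcp host port recorded in the Ports map above (33114) is what the harness later reads back via Go templates; a minimal shell sketch of the equivalent manual queries (container name taken from the inspect output above, offered as illustration only, not as the test's own tooling):

	# container state, as used by the status check
	docker container inspect old-k8s-version-699289 --format={{.State.Status}}
	# host port mapped to the container's SSH port 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-699289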
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289: exit status 2 (312.093422ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25: (1.052488468s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-149976 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo containerd config dump                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                               │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                               │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:26:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:26:28.281119  349045 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:26:28.281492  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.281537  349045 out.go:374] Setting ErrFile to fd 2...
	I1221 20:26:28.281548  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.282030  349045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:26:28.282872  349045 out.go:368] Setting JSON to false
	I1221 20:26:28.284367  349045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4137,"bootTime":1766344651,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:26:28.284471  349045 start.go:143] virtualization: kvm guest
	I1221 20:26:28.286327  349045 out.go:179] * [embed-certs-413073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:26:28.287913  349045 notify.go:221] Checking for updates...
	I1221 20:26:28.287922  349045 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:26:28.288955  349045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:26:28.290004  349045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:28.291148  349045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:26:28.292120  349045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:26:28.293183  349045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:26:28.294636  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:28.295218  349045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:26:28.318950  349045 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:26:28.319033  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.383757  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.371053987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.383909  349045 docker.go:319] overlay module found
	I1221 20:26:28.386158  349045 out.go:179] * Using the docker driver based on existing profile
	I1221 20:26:28.387267  349045 start.go:309] selected driver: docker
	I1221 20:26:28.387285  349045 start.go:928] validating driver "docker" against &{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.387394  349045 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:26:28.388083  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.452534  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.440765419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.452814  349045 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:28.452841  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:28.452894  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:28.452946  349045 start.go:353] cluster config:
	{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.455294  349045 out.go:179] * Starting "embed-certs-413073" primary control-plane node in "embed-certs-413073" cluster
	I1221 20:26:28.456605  349045 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:26:28.457837  349045 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:26:28.458961  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:28.458999  349045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:26:28.459012  349045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:26:28.459022  349045 cache.go:65] Caching tarball of preloaded images
	I1221 20:26:28.459126  349045 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:26:28.459141  349045 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:26:28.459294  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.483646  349045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:26:28.483671  349045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:26:28.483693  349045 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:26:28.483740  349045 start.go:360] acquireMachinesLock for embed-certs-413073: {Name:mkd7ba395e71c68e48a93bb569cce5d8b29847bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:26:28.483811  349045 start.go:364] duration metric: took 47.571µs to acquireMachinesLock for "embed-certs-413073"
	I1221 20:26:28.483834  349045 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:26:28.483841  349045 fix.go:54] fixHost starting: 
	I1221 20:26:28.484078  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.505358  349045 fix.go:112] recreateIfNeeded on embed-certs-413073: state=Stopped err=<nil>
	W1221 20:26:28.505394  349045 fix.go:138] unexpected machine state, will restart: <nil>
	W1221 20:26:25.351188  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:27.876459  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	I1221 20:26:25.658690  345910 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1221 20:26:25.663742  345910 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1221 20:26:25.665156  345910 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:26:25.665185  345910 api_server.go:131] duration metric: took 1.007516766s to wait for apiserver health ...
	I1221 20:26:25.665204  345910 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:25.669277  345910 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:25.669362  345910 system_pods.go:61] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.669384  345910 system_pods.go:61] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.669405  345910 system_pods.go:61] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.669418  345910 system_pods.go:61] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.669427  345910 system_pods.go:61] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.669436  345910 system_pods.go:61] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.669450  345910 system_pods.go:61] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.669462  345910 system_pods.go:61] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.669470  345910 system_pods.go:74] duration metric: took 4.2593ms to wait for pod list to return data ...
	I1221 20:26:25.669480  345910 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:25.672042  345910 default_sa.go:45] found service account: "default"
	I1221 20:26:25.672063  345910 default_sa.go:55] duration metric: took 2.57644ms for default service account to be created ...
	I1221 20:26:25.672072  345910 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:25.674803  345910 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:25.674837  345910 system_pods.go:89] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.674847  345910 system_pods.go:89] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.674857  345910 system_pods.go:89] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.674870  345910 system_pods.go:89] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.674885  345910 system_pods.go:89] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.674904  345910 system_pods.go:89] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.674916  345910 system_pods.go:89] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.674927  345910 system_pods.go:89] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.674937  345910 system_pods.go:126] duration metric: took 2.858367ms to wait for k8s-apps to be running ...
	I1221 20:26:25.674946  345910 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:25.674994  345910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:25.692567  345910 system_svc.go:56] duration metric: took 17.613432ms WaitForService to wait for kubelet
	I1221 20:26:25.692625  345910 kubeadm.go:587] duration metric: took 2.843019767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:25.692650  345910 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:25.696171  345910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:25.696196  345910 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:25.696214  345910 node_conditions.go:105] duration metric: took 3.549535ms to run NodePressure ...
	I1221 20:26:25.696258  345910 start.go:242] waiting for startup goroutines ...
	I1221 20:26:25.696273  345910 start.go:247] waiting for cluster config update ...
	I1221 20:26:25.696292  345910 start.go:256] writing updated cluster config ...
	I1221 20:26:25.696578  345910 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:25.700995  345910 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:25.705329  345910 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:26:27.711550  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:30.211912  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:27.041802  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	W1221 20:26:29.538369  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	I1221 20:26:31.038840  339032 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:31.038875  339032 node_ready.go:38] duration metric: took 12.50377621s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:26:31.038892  339032 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:31.038958  339032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:31.054404  339032 api_server.go:72] duration metric: took 12.784988284s to wait for apiserver process to appear ...
	I1221 20:26:31.054443  339032 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:31.054466  339032 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:26:31.062787  339032 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:26:31.064052  339032 api_server.go:141] control plane version: v1.34.3
	I1221 20:26:31.064087  339032 api_server.go:131] duration metric: took 9.635216ms to wait for apiserver health ...
	I1221 20:26:31.064097  339032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:31.068373  339032 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:31.068406  339032 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.068414  339032 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.068421  339032 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.068428  339032 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.068433  339032 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.068438  339032 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.068450  339032 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.068459  339032 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.068470  339032 system_pods.go:74] duration metric: took 4.365658ms to wait for pod list to return data ...
	I1221 20:26:31.068481  339032 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:31.071303  339032 default_sa.go:45] found service account: "default"
	I1221 20:26:31.071323  339032 default_sa.go:55] duration metric: took 2.831663ms for default service account to be created ...
	I1221 20:26:31.071332  339032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:31.074677  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.074711  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.074720  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.074727  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.074733  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.074739  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.074745  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.074750  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.074761  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.074793  339032 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1221 20:26:31.357381  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.357419  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.357429  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.357449  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.357455  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.357465  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.357477  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.357487  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.357495  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:26:31.357504  339032 system_pods.go:126] duration metric: took 286.165238ms to wait for k8s-apps to be running ...
	I1221 20:26:31.357517  339032 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:31.357569  339032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:31.374162  339032 system_svc.go:56] duration metric: took 16.636072ms WaitForService to wait for kubelet
	I1221 20:26:31.374199  339032 kubeadm.go:587] duration metric: took 13.104782839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:31.374252  339032 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:31.377689  339032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:31.377718  339032 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:31.377806  339032 node_conditions.go:105] duration metric: took 3.541844ms to run NodePressure ...
	I1221 20:26:31.377821  339032 start.go:242] waiting for startup goroutines ...
	I1221 20:26:31.377832  339032 start.go:247] waiting for cluster config update ...
	I1221 20:26:31.377847  339032 start.go:256] writing updated cluster config ...
	I1221 20:26:31.378180  339032 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:31.382766  339032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:31.386785  339032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:28.506776  349045 out.go:252] * Restarting existing docker container for "embed-certs-413073" ...
	I1221 20:26:28.506853  349045 cli_runner.go:164] Run: docker start embed-certs-413073
	I1221 20:26:28.754220  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.772197  349045 kic.go:430] container "embed-certs-413073" state is running.
	I1221 20:26:28.772613  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:28.791483  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.791662  349045 machine.go:94] provisionDockerMachine start ...
	I1221 20:26:28.791717  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:28.811016  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:28.811307  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:28.811325  349045 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:26:28.811830  349045 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33124: read: connection reset by peer
	I1221 20:26:31.973490  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:31.973519  349045 ubuntu.go:182] provisioning hostname "embed-certs-413073"
	I1221 20:26:31.973592  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:31.998312  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:31.998627  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:31.998655  349045 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-413073 && echo "embed-certs-413073" | sudo tee /etc/hostname
	I1221 20:26:32.169162  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:32.169295  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.197522  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.197833  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.197860  349045 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-413073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-413073/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-413073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:26:32.356078  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:26:32.356105  349045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:26:32.356129  349045 ubuntu.go:190] setting up certificates
	I1221 20:26:32.356139  349045 provision.go:84] configureAuth start
	I1221 20:26:32.356205  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:32.380995  349045 provision.go:143] copyHostCerts
	I1221 20:26:32.381067  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:26:32.381088  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:26:32.381158  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:26:32.381336  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:26:32.381352  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:26:32.381399  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:26:32.381517  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:26:32.381528  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:26:32.381563  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:26:32.381652  349045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.embed-certs-413073 san=[127.0.0.1 192.168.94.2 embed-certs-413073 localhost minikube]
	I1221 20:26:32.479184  349045 provision.go:177] copyRemoteCerts
	I1221 20:26:32.479284  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:26:32.479340  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.505304  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:32.615477  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:26:32.637386  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1221 20:26:32.657941  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:26:32.679246  349045 provision.go:87] duration metric: took 323.089087ms to configureAuth
	I1221 20:26:32.679276  349045 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:26:32.679495  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:32.679620  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.704097  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.704422  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.704452  349045 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:26:32.393030  339032 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:26:32.393060  339032 pod_ready.go:86] duration metric: took 1.006253441s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.395886  339032 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.400375  339032 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.400399  339032 pod_ready.go:86] duration metric: took 4.491012ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.403288  339032 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.408032  339032 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.408055  339032 pod_ready.go:86] duration metric: took 4.736601ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.410124  339032 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.590191  339032 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.590243  339032 pod_ready.go:86] duration metric: took 180.076227ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.790437  339032 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.190998  339032 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:26:33.191030  339032 pod_ready.go:86] duration metric: took 400.559576ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.390945  339032 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790606  339032 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:33.790642  339032 pod_ready.go:86] duration metric: took 399.665202ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790658  339032 pod_ready.go:40] duration metric: took 2.40784924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:33.840839  339032 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:26:33.865033  339032 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
	W1221 20:26:30.348013  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.348358  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.212243  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:34.711111  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:26:34.122259  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:26:34.122300  349045 machine.go:97] duration metric: took 5.330623534s to provisionDockerMachine
	I1221 20:26:34.122318  349045 start.go:293] postStartSetup for "embed-certs-413073" (driver="docker")
	I1221 20:26:34.122332  349045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:26:34.122408  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:26:34.122462  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.145201  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.245112  349045 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:26:34.248686  349045 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:26:34.248719  349045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:26:34.248731  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:26:34.248796  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:26:34.248891  349045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:26:34.248979  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:26:34.257867  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:34.276308  349045 start.go:296] duration metric: took 153.975025ms for postStartSetup
	I1221 20:26:34.276373  349045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:26:34.276431  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.295030  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.390399  349045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:26:34.395009  349045 fix.go:56] duration metric: took 5.911162905s for fixHost
	I1221 20:26:34.395034  349045 start.go:83] releasing machines lock for "embed-certs-413073", held for 5.911210955s
	I1221 20:26:34.395103  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:34.415658  349045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:26:34.415718  349045 ssh_runner.go:195] Run: cat /version.json
	I1221 20:26:34.415753  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.415772  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.437353  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.438191  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.532205  349045 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:34.590519  349045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:26:34.626292  349045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:26:34.631256  349045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:26:34.631323  349045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:26:34.640212  349045 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
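
(Editor-added illustration, not minikube's code.) The two log lines above record minikube renaming any bridge/podman CNI config files in /etc/cni/net.d so that only the kindnet CNI is active; only the directory path and the ".mk_disabled" suffix are taken from the log, everything else in this Go sketch is assumed:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the
// container runtime ignores them, loosely mirroring the "find ... -exec mv"
// command shown in the log above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	var moved []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return moved, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(moved) == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
	} else {
		fmt.Println("disabled:", strings.Join(moved, ", "))
	}
}
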
	I1221 20:26:34.640261  349045 start.go:496] detecting cgroup driver to use...
	I1221 20:26:34.640296  349045 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:26:34.640339  349045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:26:34.655152  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:26:34.666935  349045 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:26:34.666995  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:26:34.681162  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:26:34.694205  349045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:26:34.773836  349045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:26:34.866636  349045 docker.go:234] disabling docker service ...
	I1221 20:26:34.866704  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:26:34.883877  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:26:34.897764  349045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:26:34.992795  349045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:26:35.089519  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:26:35.101885  349045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:26:35.117012  349045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:26:35.117071  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.125693  349045 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:26:35.125742  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.135514  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.144405  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.153105  349045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:26:35.161280  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.170948  349045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.181393  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.190217  349045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:26:35.197559  349045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:26:35.204474  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.282887  349045 ssh_runner.go:195] Run: sudo systemctl restart crio
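
(Editor-added illustration, not minikube's code.) The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager, then reload systemd and restart crio. A minimal in-process equivalent might look like the following Go sketch; the file path and values come from the log, the approach itself is assumed:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := string(data)
	// Point CRI-O at the expected pause image.
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Match the cgroup driver detected on the host ("systemd" in this run).
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(conf, []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", conf, "- restart crio to apply")
}
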
	I1221 20:26:35.491553  349045 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:26:35.491642  349045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:26:35.496405  349045 start.go:564] Will wait 60s for crictl version
	I1221 20:26:35.496470  349045 ssh_runner.go:195] Run: which crictl
	I1221 20:26:35.500988  349045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:26:35.525158  349045 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:26:35.525280  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.553291  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.582458  349045 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:26:35.583603  349045 cli_runner.go:164] Run: docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:26:35.601409  349045 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1221 20:26:35.605668  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.616349  349045 kubeadm.go:884] updating cluster {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:26:35.616474  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:35.616526  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.647904  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.647924  349045 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:26:35.647969  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.672746  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.672771  349045 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:26:35.672778  349045 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1221 20:26:35.672870  349045 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-413073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:26:35.672941  349045 ssh_runner.go:195] Run: crio config
	I1221 20:26:35.721006  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:35.721028  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:35.721041  349045 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:26:35.721060  349045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-413073 NodeName:embed-certs-413073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:26:35.721172  349045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-413073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:26:35.721262  349045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:26:35.729396  349045 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:26:35.729469  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:26:35.737076  349045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1221 20:26:35.749458  349045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:26:35.762014  349045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
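
(Editor-added illustration, not minikube's code.) The kubeadm config dumped above and copied to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that decodes each document and confirms the kubelet's cgroupDriver matches the "systemd" driver detected earlier in the log could look like this; the use of gopkg.in/yaml.v3 and the check itself are assumptions, only the file path and field names come from the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document is decoded into a generic map.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Printf("kubelet cgroupDriver=%v (want systemd)\n", doc["cgroupDriver"])
		}
	}
}
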
	I1221 20:26:35.776095  349045 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:26:35.779907  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.790127  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.871764  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:35.894457  349045 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073 for IP: 192.168.94.2
	I1221 20:26:35.894479  349045 certs.go:195] generating shared ca certs ...
	I1221 20:26:35.894498  349045 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:35.894692  349045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:26:35.894757  349045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:26:35.894773  349045 certs.go:257] generating profile certs ...
	I1221 20:26:35.894903  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key
	I1221 20:26:35.894982  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206
	I1221 20:26:35.895039  349045 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key
	I1221 20:26:35.895195  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:26:35.895255  349045 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:26:35.895269  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:26:35.895316  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:26:35.895359  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:26:35.895394  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:26:35.895460  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:35.896857  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:26:35.918148  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:26:35.937363  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:26:35.956791  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:26:35.980319  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1221 20:26:35.998307  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:26:36.016864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:26:36.035412  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:26:36.052147  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:26:36.068514  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:26:36.085864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:26:36.104067  349045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:26:36.116311  349045 ssh_runner.go:195] Run: openssl version
	I1221 20:26:36.122281  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.129549  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:26:36.137800  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141357  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141422  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.177095  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:36.184709  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.191985  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:26:36.199039  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202890  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202936  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.237305  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:26:36.244698  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.251690  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:26:36.258834  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262601  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262651  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.297116  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:26:36.304458  349045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:26:36.308039  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:26:36.343506  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:26:36.379330  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:26:36.419930  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:26:36.467926  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:26:36.517528  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 20:26:36.569807  349045 kubeadm.go:401] StartCluster: {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:36.569934  349045 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:26:36.570012  349045 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:26:36.603596  349045 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:26:36.603620  349045 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:26:36.603626  349045 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:26:36.603631  349045 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:26:36.603635  349045 cri.go:96] found id: ""
	I1221 20:26:36.603694  349045 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:26:36.615256  349045 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:36Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:26:36.615332  349045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:26:36.623063  349045 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:26:36.623081  349045 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:26:36.623168  349045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:26:36.630509  349045 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:26:36.631520  349045 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-413073" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.632152  349045 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-413073" cluster setting kubeconfig missing "embed-certs-413073" context setting]
	I1221 20:26:36.633238  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.635239  349045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:26:36.642696  349045 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1221 20:26:36.642724  349045 kubeadm.go:602] duration metric: took 19.637661ms to restartPrimaryControlPlane
	I1221 20:26:36.642733  349045 kubeadm.go:403] duration metric: took 72.941162ms to StartCluster
	I1221 20:26:36.642749  349045 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.642804  349045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.644942  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.645178  349045 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:26:36.645266  349045 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:26:36.645373  349045 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-413073"
	I1221 20:26:36.645392  349045 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-413073"
	W1221 20:26:36.645404  349045 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:26:36.645407  349045 addons.go:70] Setting dashboard=true in profile "embed-certs-413073"
	I1221 20:26:36.645432  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645434  349045 addons.go:239] Setting addon dashboard=true in "embed-certs-413073"
	I1221 20:26:36.645440  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:36.645468  349045 addons.go:70] Setting default-storageclass=true in profile "embed-certs-413073"
	W1221 20:26:36.645444  349045 addons.go:248] addon dashboard should already be in state true
	I1221 20:26:36.645494  349045 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-413073"
	I1221 20:26:36.645510  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645796  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645906  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645963  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.647026  349045 out.go:179] * Verifying Kubernetes components...
	I1221 20:26:36.648142  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:36.670830  349045 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:26:36.671909  349045 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:26:36.671982  349045 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:26:36.672921  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:26:36.672938  349045 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:26:36.672981  349045 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.672995  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.672999  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:26:36.673047  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.673322  349045 addons.go:239] Setting addon default-storageclass=true in "embed-certs-413073"
	W1221 20:26:36.673343  349045 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:26:36.673379  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.673831  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.713604  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.716336  349045 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.716359  349045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:26:36.716417  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.717556  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.740725  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.799504  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:36.813392  349045 node_ready.go:35] waiting up to 6m0s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:36.827736  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.831307  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:26:36.831331  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:26:36.847340  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:26:36.847361  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:26:36.857774  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.864116  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:26:36.864135  349045 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:26:36.880513  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:26:36.880541  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:26:36.895508  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:26:36.895533  349045 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:26:36.909454  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:26:36.909478  349045 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:26:36.923439  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:26:36.923466  349045 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:26:36.936237  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:26:36.936258  349045 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:26:36.948470  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:36.948487  349045 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:26:36.960580  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:38.055702  349045 node_ready.go:49] node "embed-certs-413073" is "Ready"
	I1221 20:26:38.055739  349045 node_ready.go:38] duration metric: took 1.242302482s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:38.055756  349045 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:38.055807  349045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:38.565557  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70771546s)
	I1221 20:26:38.566433  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.605814489s)
	I1221 20:26:38.566655  349045 api_server.go:72] duration metric: took 1.921448818s to wait for apiserver process to appear ...
	I1221 20:26:38.566678  349045 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:38.566680  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.738896864s)
	I1221 20:26:38.566700  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:38.571884  349045 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-413073 addons enable metrics-server
	
	I1221 20:26:38.572921  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:26:38.573011  349045 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:26:38.580646  349045 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1221 20:26:34.847391  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.849413  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:39.348748  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.714889  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:39.210974  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:26:38.581622  349045 addons.go:530] duration metric: took 1.936370568s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:26:39.066797  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:39.071340  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:26:39.071363  349045 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:26:39.567578  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:39.572904  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1221 20:26:39.574079  349045 api_server.go:141] control plane version: v1.34.3
	I1221 20:26:39.574110  349045 api_server.go:131] duration metric: took 1.007425087s to wait for apiserver health ...
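
(Editor-added illustration, not minikube's code.) The healthz output above shows the apiserver at https://192.168.94.2:8443/healthz answering 500 while poststart hooks such as rbac/bootstrap-roles are still pending, then 200 once they complete. A minimal Go sketch of such a poll loop follows; only the endpoint URL is taken from the log, and TLS verification is skipped purely to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// e.g. 500 while poststarthook/rbac/bootstrap-roles is still pending
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
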
	I1221 20:26:39.574121  349045 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:39.577965  349045 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:39.578010  349045 system_pods.go:61] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:39.578018  349045 system_pods.go:61] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:39.578025  349045 system_pods.go:61] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:39.578034  349045 system_pods.go:61] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:39.578041  349045 system_pods.go:61] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:39.578049  349045 system_pods.go:61] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:39.578054  349045 system_pods.go:61] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:39.578073  349045 system_pods.go:61] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:39.578084  349045 system_pods.go:74] duration metric: took 3.956948ms to wait for pod list to return data ...
	I1221 20:26:39.578093  349045 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:39.580020  349045 default_sa.go:45] found service account: "default"
	I1221 20:26:39.580039  349045 default_sa.go:55] duration metric: took 1.940442ms for default service account to be created ...
	I1221 20:26:39.580046  349045 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:39.582151  349045 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:39.582185  349045 system_pods.go:89] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:39.582196  349045 system_pods.go:89] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:39.582215  349045 system_pods.go:89] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:39.582239  349045 system_pods.go:89] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:39.582250  349045 system_pods.go:89] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:39.582261  349045 system_pods.go:89] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:39.582266  349045 system_pods.go:89] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:39.582275  349045 system_pods.go:89] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:39.582281  349045 system_pods.go:126] duration metric: took 2.230431ms to wait for k8s-apps to be running ...
	I1221 20:26:39.582290  349045 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:39.582327  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:39.595168  349045 system_svc.go:56] duration metric: took 12.869871ms WaitForService to wait for kubelet
	I1221 20:26:39.595204  349045 kubeadm.go:587] duration metric: took 2.949997064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:39.595275  349045 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:39.597579  349045 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:39.597599  349045 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:39.597612  349045 node_conditions.go:105] duration metric: took 2.327211ms to run NodePressure ...
	I1221 20:26:39.597621  349045 start.go:242] waiting for startup goroutines ...
	I1221 20:26:39.597629  349045 start.go:247] waiting for cluster config update ...
	I1221 20:26:39.597641  349045 start.go:256] writing updated cluster config ...
	I1221 20:26:39.597849  349045 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:39.601386  349045 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:39.604097  349045 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:26:41.609005  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:26:41.846863  341446 pod_ready.go:94] pod "coredns-5dd5756b68-v285b" is "Ready"
	I1221 20:26:41.846892  341446 pod_ready.go:86] duration metric: took 32.005636894s for pod "coredns-5dd5756b68-v285b" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.849729  341446 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.854590  341446 pod_ready.go:94] pod "etcd-old-k8s-version-699289" is "Ready"
	I1221 20:26:41.854623  341446 pod_ready.go:86] duration metric: took 4.871295ms for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.857354  341446 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.860875  341446 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-699289" is "Ready"
	I1221 20:26:41.860893  341446 pod_ready.go:86] duration metric: took 3.516703ms for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.863111  341446 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.046017  341446 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-699289" is "Ready"
	I1221 20:26:42.046051  341446 pod_ready.go:86] duration metric: took 182.920409ms for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.246045  341446 pod_ready.go:83] waiting for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.645985  341446 pod_ready.go:94] pod "kube-proxy-hsngj" is "Ready"
	I1221 20:26:42.646015  341446 pod_ready.go:86] duration metric: took 399.94762ms for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.847481  341446 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:43.245686  341446 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-699289" is "Ready"
	I1221 20:26:43.245717  341446 pod_ready.go:86] duration metric: took 398.204753ms for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:43.245732  341446 pod_ready.go:40] duration metric: took 33.412666685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:43.301870  341446 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1221 20:26:43.303149  341446 out.go:203] 
	W1221 20:26:43.304309  341446 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1221 20:26:43.305406  341446 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1221 20:26:43.306573  341446 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-699289" cluster and "default" namespace by default
	W1221 20:26:41.710083  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:43.712306  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:43.610081  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:45.610256  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:48.108857  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:46.212011  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:48.710356  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:50.109521  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:52.609030  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:50.711369  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:53.210589  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.456824198Z" level=info msg="Created container 26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm/kubernetes-dashboard" id=8d51b8e8-9db0-46fc-bc24-daf3a1325c3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.457338604Z" level=info msg="Starting container: 26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6" id=bd61fd57-a26c-4f83-a962-0ba8c00b6565 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.458895069Z" level=info msg="Started container" PID=1732 containerID=26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm/kubernetes-dashboard id=bd61fd57-a26c-4f83-a962-0ba8c00b6565 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca1864c1f5f2574630d417c22c6cbac07b8889ae04398d35572259dd7125a8fe
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.774836101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0d5ed1e5-73ad-45dd-a982-97de5a36d6ff name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.775765549Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8639f7f8-2d93-4158-a192-939f0175b1e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.777067736Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8799b430-d075-45ec-aaa0-1dfe10033a80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.777277023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782482848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782661073Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e74d62c2b7891d6334262dbef4d57e4309086136d2098ca7b448b6d58daa3cc9/merged/etc/passwd: no such file or directory"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782695281Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e74d62c2b7891d6334262dbef4d57e4309086136d2098ca7b448b6d58daa3cc9/merged/etc/group: no such file or directory"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782977065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.806140344Z" level=info msg="Created container fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1: kube-system/storage-provisioner/storage-provisioner" id=8799b430-d075-45ec-aaa0-1dfe10033a80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.80677734Z" level=info msg="Starting container: fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1" id=8cde22e9-2742-4885-807d-1bad20895285 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.808616351Z" level=info msg="Started container" PID=1755 containerID=fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1 description=kube-system/storage-provisioner/storage-provisioner id=8cde22e9-2742-4885-807d-1bad20895285 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b1aaa26f89056b45a62745c4d8398ec57e6789f2ff259b9397541d42052ffa5
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.666774352Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e00ee0e6-86fc-4498-bf04-a4e41d395711 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.667900026Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65ef8591-0419-41e8-bf14-7552adbb4816 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.669126745Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=57c2091c-2ad4-42f5-9684-575595a77fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.669339705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.677541673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.678355177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.710814948Z" level=info msg="Created container 65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=57c2091c-2ad4-42f5-9684-575595a77fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.712138512Z" level=info msg="Starting container: 65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef" id=7a8a44c5-edfe-4967-9f03-78163dcf036e name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.714832219Z" level=info msg="Started container" PID=1773 containerID=65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper id=7a8a44c5-edfe-4967-9f03-78163dcf036e name=/runtime.v1.RuntimeService/StartContainer sandboxID=49f748e9aa7147180a0d4b212198c27f4a6aea56866cc9b661e4a8be2d5204fc
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.786773308Z" level=info msg="Removing container: 626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b" id=10184fdf-ecdc-4ecd-9b4f-f6addd3d9b3b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.799320676Z" level=info msg="Removed container 626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=10184fdf-ecdc-4ecd-9b4f-f6addd3d9b3b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	65bf231f6b72e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   49f748e9aa714       dashboard-metrics-scraper-5f989dc9cf-vj972       kubernetes-dashboard
	fd913503e4ec5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   2b1aaa26f8905       storage-provisioner                              kube-system
	26d63f5746b09       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   ca1864c1f5f25       kubernetes-dashboard-8694d4445c-72bcm            kubernetes-dashboard
	1460cc6bb57e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   723f2aac14aa8       coredns-5dd5756b68-v285b                         kube-system
	40c6ee8927127       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   814f2a9392fc0       busybox                                          default
	d6f65e64c24a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   2b1aaa26f8905       storage-provisioner                              kube-system
	a0d586c455cc3       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           49 seconds ago      Running             kindnet-cni                 0                   fa9ac78100246       kindnet-g5mb8                                    kube-system
	33c6adad84864       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   2bb99b42d1c06       kube-proxy-hsngj                                 kube-system
	d1fb79aa0d924       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   a5a90cf75a190       kube-controller-manager-old-k8s-version-699289   kube-system
	f568d82d77c18       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   302a8cab485fe       etcd-old-k8s-version-699289                      kube-system
	5fc8d02fce783       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   30b1de32413a9       kube-apiserver-old-k8s-version-699289            kube-system
	64bce6865fb1a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   af7d59ca88611       kube-scheduler-old-k8s-version-699289            kube-system
	
	
	==> coredns [1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43636 - 61565 "HINFO IN 7723653539803018527.2493573840787226081. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.474792149s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-699289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-699289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-699289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-699289
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-699289
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                5608b3c9-c686-468f-89f8-92ad8cb9ae20
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-v285b                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-699289                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-g5mb8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-old-k8s-version-699289             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-699289    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-hsngj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-old-k8s-version-699289             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vj972        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-72bcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-699289 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x9 over 53s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4] <==
	{"level":"info","ts":"2025-12-21T20:26:06.221Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-21T20:26:06.221086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-21T20:26:06.221172Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-21T20:26:06.221291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:26:06.221335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:26:06.22359Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-21T20:26:06.223747Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:26:06.224199Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:26:06.223846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-21T20:26:06.22387Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:26:07.211703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.211785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.211793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.2118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.214628Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-699289 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:26:07.214664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:07.214675Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:07.21484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:07.21487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:07.215688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-21T20:26:07.215733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:26:27.855343Z","caller":"traceutil/trace.go:171","msg":"trace[147729461] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"104.530903ms","start":"2025-12-21T20:26:27.750789Z","end":"2025-12-21T20:26:27.85532Z","steps":["trace[147729461] 'process raft request'  (duration: 104.253761ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:26:58 up  1:09,  0 user,  load average: 4.40, 3.89, 2.76
	Linux old-k8s-version-699289 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3] <==
	I1221 20:26:09.358948       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:09.359219       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1221 20:26:09.359434       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:09.359461       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:09.359483       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:09.593493       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:09.593616       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:09.657635       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:09.657885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:09.957888       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:09.957920       1 metrics.go:72] Registering metrics
	I1221 20:26:09.958172       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:19.593425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:19.593508       1 main.go:301] handling current node
	I1221 20:26:29.594067       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:29.594108       1 main.go:301] handling current node
	I1221 20:26:39.593757       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:39.593790       1 main.go:301] handling current node
	I1221 20:26:49.598472       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:49.598505       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559] <==
	I1221 20:26:08.337564       1 autoregister_controller.go:141] Starting autoregister controller
	I1221 20:26:08.337570       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:26:08.337577       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:26:08.337617       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1221 20:26:08.337664       1 shared_informer.go:318] Caches are synced for configmaps
	I1221 20:26:08.337700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1221 20:26:08.338062       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:08.343033       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:08.356277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1221 20:26:09.245589       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:26:09.593945       1 controller.go:624] quota admission added evaluator for: namespaces
	I1221 20:26:09.628335       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1221 20:26:09.653131       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:09.662976       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:09.671728       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1221 20:26:09.716662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.126.245"}
	I1221 20:26:09.736662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.92"}
	E1221 20:26:18.337757       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	I1221 20:26:21.291943       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:21.300914       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1221 20:26:21.337122       1 controller.go:624] quota admission added evaluator for: endpoints
	E1221 20:26:28.338252       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:38.338805       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:48.339919       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:58.341025       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5] <==
	I1221 20:26:21.376858       1 taint_manager.go:211] "Sending events to api server"
	I1221 20:26:21.376854       1 event.go:307] "Event occurred" object="old-k8s-version-699289" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller"
	I1221 20:26:21.376875       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1221 20:26:21.415522       1 shared_informer.go:318] Caches are synced for disruption
	I1221 20:26:21.417773       1 shared_informer.go:318] Caches are synced for stateful set
	I1221 20:26:21.424019       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1221 20:26:21.433473       1 shared_informer.go:318] Caches are synced for resource quota
	I1221 20:26:21.448169       1 shared_informer.go:318] Caches are synced for resource quota
	I1221 20:26:21.452696       1 shared_informer.go:318] Caches are synced for PVC protection
	I1221 20:26:21.476149       1 shared_informer.go:318] Caches are synced for expand
	I1221 20:26:21.481506       1 shared_informer.go:318] Caches are synced for persistent volume
	I1221 20:26:21.493967       1 shared_informer.go:318] Caches are synced for ephemeral
	I1221 20:26:21.502615       1 shared_informer.go:318] Caches are synced for attach detach
	I1221 20:26:21.863535       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:26:21.875738       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:26:21.875774       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1221 20:26:24.740768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.34µs"
	I1221 20:26:25.747747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.88µs"
	I1221 20:26:26.768488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="141.298µs"
	I1221 20:26:27.876221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.175413ms"
	I1221 20:26:27.877099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.986µs"
	I1221 20:26:41.730380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.633704ms"
	I1221 20:26:41.730513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.346µs"
	I1221 20:26:42.801460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.494µs"
	I1221 20:26:51.648831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.63µs"
	
	
	==> kube-proxy [33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea] <==
	I1221 20:26:09.213044       1 server_others.go:69] "Using iptables proxy"
	I1221 20:26:09.227980       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1221 20:26:09.264559       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:09.268664       1 server_others.go:152] "Using iptables Proxier"
	I1221 20:26:09.268746       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 20:26:09.268774       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 20:26:09.268833       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 20:26:09.269131       1 server.go:846] "Version info" version="v1.28.0"
	I1221 20:26:09.269180       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:09.269996       1 config.go:188] "Starting service config controller"
	I1221 20:26:09.270136       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 20:26:09.270194       1 config.go:315] "Starting node config controller"
	I1221 20:26:09.270218       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 20:26:09.271393       1 config.go:97] "Starting endpoint slice config controller"
	I1221 20:26:09.271430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 20:26:09.370698       1 shared_informer.go:318] Caches are synced for node config
	I1221 20:26:09.370810       1 shared_informer.go:318] Caches are synced for service config
	I1221 20:26:09.375997       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d] <==
	I1221 20:26:06.870515       1 serving.go:348] Generated self-signed cert in-memory
	W1221 20:26:08.272311       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:08.272343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:08.272356       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:08.272366       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:08.307856       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1221 20:26:08.308019       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:08.309834       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:08.309920       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 20:26:08.311710       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1221 20:26:08.311748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1221 20:26:08.410289       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.336873     732 topology_manager.go:215] "Topology Admit Handler" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452631     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5qd\" (UniqueName: \"kubernetes.io/projected/432bee5e-70b1-42af-8b1c-e6f832fcc048-kube-api-access-4d5qd\") pod \"kubernetes-dashboard-8694d4445c-72bcm\" (UID: \"432bee5e-70b1-42af-8b1c-e6f832fcc048\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452688     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/432bee5e-70b1-42af-8b1c-e6f832fcc048-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-72bcm\" (UID: \"432bee5e-70b1-42af-8b1c-e6f832fcc048\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452724     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/06e27cf5-d3c0-4f8b-98eb-01f030181bd6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vj972\" (UID: \"06e27cf5-d3c0-4f8b-98eb-01f030181bd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452877     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669g7\" (UniqueName: \"kubernetes.io/projected/06e27cf5-d3c0-4f8b-98eb-01f030181bd6-kube-api-access-669g7\") pod \"dashboard-metrics-scraper-5f989dc9cf-vj972\" (UID: \"06e27cf5-d3c0-4f8b-98eb-01f030181bd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:24 old-k8s-version-699289 kubelet[732]: I1221 20:26:24.730410     732 scope.go:117] "RemoveContainer" containerID="15ea0c4c73c159ac337454513c9c54562de164d73f528042020447341e937d3f"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: I1221 20:26:25.734742     732 scope.go:117] "RemoveContainer" containerID="15ea0c4c73c159ac337454513c9c54562de164d73f528042020447341e937d3f"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: I1221 20:26:25.734939     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: E1221 20:26:25.735330     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:26 old-k8s-version-699289 kubelet[732]: I1221 20:26:26.739045     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:26 old-k8s-version-699289 kubelet[732]: E1221 20:26:26.739510     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:31 old-k8s-version-699289 kubelet[732]: I1221 20:26:31.639073     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:31 old-k8s-version-699289 kubelet[732]: E1221 20:26:31.639382     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:39 old-k8s-version-699289 kubelet[732]: I1221 20:26:39.774411     732 scope.go:117] "RemoveContainer" containerID="d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	Dec 21 20:26:39 old-k8s-version-699289 kubelet[732]: I1221 20:26:39.786244     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm" podStartSLOduration=13.02867781 podCreationTimestamp="2025-12-21 20:26:21 +0000 UTC" firstStartedPulling="2025-12-21 20:26:21.663127711 +0000 UTC m=+16.087679725" lastFinishedPulling="2025-12-21 20:26:27.420618116 +0000 UTC m=+21.845170134" observedRunningTime="2025-12-21 20:26:27.857352383 +0000 UTC m=+22.281904417" watchObservedRunningTime="2025-12-21 20:26:39.786168219 +0000 UTC m=+34.210720250"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.665923     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.785458     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.785805     732 scope.go:117] "RemoveContainer" containerID="65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: E1221 20:26:42.786157     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:51 old-k8s-version-699289 kubelet[732]: I1221 20:26:51.639629     732 scope.go:117] "RemoveContainer" containerID="65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	Dec 21 20:26:51 old-k8s-version-699289 kubelet[732]: E1221 20:26:51.639956     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: kubelet.service: Consumed 1.425s CPU time.
	
	
	==> kubernetes-dashboard [26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6] <==
	2025/12/21 20:26:27 Starting overwatch
	2025/12/21 20:26:27 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:27 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:27 Using secret token for csrf signing
	2025/12/21 20:26:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:27 Successful initial request to the apiserver, version: v1.28.0
	2025/12/21 20:26:27 Generating JWE encryption key
	2025/12/21 20:26:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:27 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:27 Creating in-cluster Sidecar client
	2025/12/21 20:26:27 Serving insecurely on HTTP port: 9090
	2025/12/21 20:26:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc] <==
	I1221 20:26:09.161606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:26:39.168393       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1] <==
	I1221 20:26:39.820936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:26:39.828713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:26:39.828756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 20:26:57.224532       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:26:57.224666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff!
	I1221 20:26:57.224665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b7387ac-0eac-492c-9220-7a6071dd4756", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff became leader
	I1221 20:26:57.324857       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff!
	

                                                
                                                
-- /stdout --
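The storage-provisioner log captured above shows the provisioner acquiring the kube-system/k8s.io-minikube-hostpath leader-election lock before starting its controller. As a minimal stand-alone sketch (not part of the test suite; it assumes kubectl is on PATH and the old-k8s-version-699289 context from this report still exists), the lock object can be dumped to see which instance currently holds it; for this client-go version the holder should be recorded in the Endpoints object's annotations:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Dump the Endpoints object used as the storage-provisioner's
	// leader-election lock ("k8s.io-minikube-hostpath" in the log above).
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-699289",
		"-n", "kube-system", "get", "endpoints", "k8s.io-minikube-hostpath",
		"-o", "yaml").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}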
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-699289 -n old-k8s-version-699289: exit status 2 (324.595167ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-699289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
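The kubectl query above is the post-mortem's quick check for pods stuck outside the Running phase. A minimal stand-alone sketch of the same check (an illustration only, assuming kubectl is on PATH and the old-k8s-version-699289 context still exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the names of all pods, in any namespace, whose phase is not
	// Running, mirroring the helpers_test.go field-selector query above.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-699289",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	names := strings.Fields(string(out))
	fmt.Printf("%d pod(s) not Running: %v\n", len(names), names)
}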
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-699289
helpers_test.go:244: (dbg) docker inspect old-k8s-version-699289:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	        "Created": "2025-12-21T20:24:47.982475594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:25:59.574578126Z",
	            "FinishedAt": "2025-12-21T20:25:58.720206224Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/hosts",
	        "LogPath": "/var/lib/docker/containers/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3/e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3-json.log",
	        "Name": "/old-k8s-version-699289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-699289:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-699289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e26e2b356a85424e6bc3362dbe1e1e0e93a801382350b589e88219c86a2c22d3",
	                "LowerDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9c3d83531e288ce460c5322d51162724d949defb265f968bd7824419305c3f3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-699289",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-699289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-699289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-699289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "44f5a444d357151aade215d64d8fbf08a5f09ecad4d17a4d6f7120f032080072",
	            "SandboxKey": "/var/run/docker/netns/44f5a444d357",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-699289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "99f5907a172c3f93121569e27574257a2eb119dd81f153d568f418838cd89542",
	                    "EndpointID": "e97be30236fbfb58a7f12abff0422eafca376eb3de07261f9bedf4965af472e0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:ec:3c:0c:72:9a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-699289",
	                        "e26e2b356a85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
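The docker inspect dump above is captured in full for the post-mortem, while the cli_runner.go calls in the minikube logs further down read single fields with a Go template (for example `docker container inspect embed-certs-413073 --format={{.State.Status}}`). A minimal sketch of that narrower query against the container from this report (an illustration only, assuming the docker CLI is on PATH and the container still exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the docker CLI for just the container state instead of the
	// full inspect JSON, the same way the cli_runner.go calls below do.
	out, err := exec.Command("docker", "container", "inspect",
		"old-k8s-version-699289", "--format={{.State.Status}}").Output()
	if err != nil {
		fmt.Printf("docker inspect failed: %v\n", err)
		return
	}
	fmt.Println("container state:", strings.TrimSpace(string(out)))
}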
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289: exit status 2 (317.900132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-699289 logs -n 25: (1.052163045s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-149976 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo containerd config dump                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                             │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                              │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                               │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                               │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:26:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:26:28.281119  349045 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:26:28.281492  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.281537  349045 out.go:374] Setting ErrFile to fd 2...
	I1221 20:26:28.281548  349045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:26:28.282030  349045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:26:28.282872  349045 out.go:368] Setting JSON to false
	I1221 20:26:28.284367  349045 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4137,"bootTime":1766344651,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:26:28.284471  349045 start.go:143] virtualization: kvm guest
	I1221 20:26:28.286327  349045 out.go:179] * [embed-certs-413073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:26:28.287913  349045 notify.go:221] Checking for updates...
	I1221 20:26:28.287922  349045 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:26:28.288955  349045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:26:28.290004  349045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:28.291148  349045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:26:28.292120  349045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:26:28.293183  349045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:26:28.294636  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:28.295218  349045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:26:28.318950  349045 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:26:28.319033  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.383757  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.371053987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.383909  349045 docker.go:319] overlay module found
	I1221 20:26:28.386158  349045 out.go:179] * Using the docker driver based on existing profile
	I1221 20:26:28.387267  349045 start.go:309] selected driver: docker
	I1221 20:26:28.387285  349045 start.go:928] validating driver "docker" against &{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.387394  349045 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:26:28.388083  349045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:26:28.452534  349045 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:26:28.440765419 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:26:28.452814  349045 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:28.452841  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:28.452894  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:28.452946  349045 start.go:353] cluster config:
	{Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:28.455294  349045 out.go:179] * Starting "embed-certs-413073" primary control-plane node in "embed-certs-413073" cluster
	I1221 20:26:28.456605  349045 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:26:28.457837  349045 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:26:28.458961  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:28.458999  349045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1221 20:26:28.459012  349045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:26:28.459022  349045 cache.go:65] Caching tarball of preloaded images
	I1221 20:26:28.459126  349045 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:26:28.459141  349045 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1221 20:26:28.459294  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.483646  349045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:26:28.483671  349045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:26:28.483693  349045 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:26:28.483740  349045 start.go:360] acquireMachinesLock for embed-certs-413073: {Name:mkd7ba395e71c68e48a93bb569cce5d8b29847bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:26:28.483811  349045 start.go:364] duration metric: took 47.571µs to acquireMachinesLock for "embed-certs-413073"
	I1221 20:26:28.483834  349045 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:26:28.483841  349045 fix.go:54] fixHost starting: 
	I1221 20:26:28.484078  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.505358  349045 fix.go:112] recreateIfNeeded on embed-certs-413073: state=Stopped err=<nil>
	W1221 20:26:28.505394  349045 fix.go:138] unexpected machine state, will restart: <nil>
	W1221 20:26:25.351188  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:27.876459  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	I1221 20:26:25.658690  345910 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1221 20:26:25.663742  345910 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1221 20:26:25.665156  345910 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:26:25.665185  345910 api_server.go:131] duration metric: took 1.007516766s to wait for apiserver health ...
	I1221 20:26:25.665204  345910 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:25.669277  345910 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:25.669362  345910 system_pods.go:61] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.669384  345910 system_pods.go:61] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.669405  345910 system_pods.go:61] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.669418  345910 system_pods.go:61] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.669427  345910 system_pods.go:61] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.669436  345910 system_pods.go:61] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.669450  345910 system_pods.go:61] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.669462  345910 system_pods.go:61] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.669470  345910 system_pods.go:74] duration metric: took 4.2593ms to wait for pod list to return data ...
	I1221 20:26:25.669480  345910 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:25.672042  345910 default_sa.go:45] found service account: "default"
	I1221 20:26:25.672063  345910 default_sa.go:55] duration metric: took 2.57644ms for default service account to be created ...
	I1221 20:26:25.672072  345910 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:25.674803  345910 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:25.674837  345910 system_pods.go:89] "coredns-7d764666f9-wkztz" [c790011a-9ad3-4344-b9ec-e5f3cfba2f21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:25.674847  345910 system_pods.go:89] "etcd-no-preload-328404" [ea4eeda5-7c80-4ff1-9a63-4d83e93c4398] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:25.674857  345910 system_pods.go:89] "kindnet-txb2h" [ff8c4aab-19f6-4e7d-9f4f-e3e499a57017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:25.674870  345910 system_pods.go:89] "kube-apiserver-no-preload-328404" [229781bb-351d-4049-abb6-02f9d6bb3d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:25.674885  345910 system_pods.go:89] "kube-controller-manager-no-preload-328404" [a03a3720-eeef-44f8-8b3d-ccf98acf3f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:25.674904  345910 system_pods.go:89] "kube-proxy-tnpxj" [81c501a3-fe67-425e-b459-5d9e8783d67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:25.674916  345910 system_pods.go:89] "kube-scheduler-no-preload-328404" [50f29152-4dd3-4f93-ba1a-324538708448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:25.674927  345910 system_pods.go:89] "storage-provisioner" [3e9e0ecd-7bb1-456d-97d6-436ccd273c6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:25.674937  345910 system_pods.go:126] duration metric: took 2.858367ms to wait for k8s-apps to be running ...
	I1221 20:26:25.674946  345910 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:25.674994  345910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:25.692567  345910 system_svc.go:56] duration metric: took 17.613432ms WaitForService to wait for kubelet
	I1221 20:26:25.692625  345910 kubeadm.go:587] duration metric: took 2.843019767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:25.692650  345910 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:25.696171  345910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:25.696196  345910 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:25.696214  345910 node_conditions.go:105] duration metric: took 3.549535ms to run NodePressure ...
	I1221 20:26:25.696258  345910 start.go:242] waiting for startup goroutines ...
	I1221 20:26:25.696273  345910 start.go:247] waiting for cluster config update ...
	I1221 20:26:25.696292  345910 start.go:256] writing updated cluster config ...
	I1221 20:26:25.696578  345910 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:25.700995  345910 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:25.705329  345910 pod_ready.go:83] waiting for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:26:27.711550  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:30.211912  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:27.041802  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	W1221 20:26:29.538369  339032 node_ready.go:57] node "default-k8s-diff-port-766361" has "Ready":"False" status (will retry)
	I1221 20:26:31.038840  339032 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:31.038875  339032 node_ready.go:38] duration metric: took 12.50377621s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:26:31.038892  339032 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:31.038958  339032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:31.054404  339032 api_server.go:72] duration metric: took 12.784988284s to wait for apiserver process to appear ...
	I1221 20:26:31.054443  339032 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:31.054466  339032 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:26:31.062787  339032 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:26:31.064052  339032 api_server.go:141] control plane version: v1.34.3
	I1221 20:26:31.064087  339032 api_server.go:131] duration metric: took 9.635216ms to wait for apiserver health ...
	I1221 20:26:31.064097  339032 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:31.068373  339032 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:31.068406  339032 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.068414  339032 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.068421  339032 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.068428  339032 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.068433  339032 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.068438  339032 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.068450  339032 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.068459  339032 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.068470  339032 system_pods.go:74] duration metric: took 4.365658ms to wait for pod list to return data ...
	I1221 20:26:31.068481  339032 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:31.071303  339032 default_sa.go:45] found service account: "default"
	I1221 20:26:31.071323  339032 default_sa.go:55] duration metric: took 2.831663ms for default service account to be created ...
	I1221 20:26:31.071332  339032 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:31.074677  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.074711  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.074720  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.074727  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.074733  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.074739  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.074745  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.074750  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.074761  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:31.074793  339032 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1221 20:26:31.357381  339032 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:31.357419  339032 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:31.357429  339032 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running
	I1221 20:26:31.357449  339032 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:26:31.357455  339032 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running
	I1221 20:26:31.357465  339032 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running
	I1221 20:26:31.357477  339032 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:26:31.357487  339032 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running
	I1221 20:26:31.357495  339032 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:26:31.357504  339032 system_pods.go:126] duration metric: took 286.165238ms to wait for k8s-apps to be running ...
	I1221 20:26:31.357517  339032 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:31.357569  339032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:31.374162  339032 system_svc.go:56] duration metric: took 16.636072ms WaitForService to wait for kubelet
	I1221 20:26:31.374199  339032 kubeadm.go:587] duration metric: took 13.104782839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:31.374252  339032 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:31.377689  339032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:31.377718  339032 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:31.377806  339032 node_conditions.go:105] duration metric: took 3.541844ms to run NodePressure ...
	I1221 20:26:31.377821  339032 start.go:242] waiting for startup goroutines ...
	I1221 20:26:31.377832  339032 start.go:247] waiting for cluster config update ...
	I1221 20:26:31.377847  339032 start.go:256] writing updated cluster config ...
	I1221 20:26:31.378180  339032 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:31.382766  339032 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:31.386785  339032 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:28.506776  349045 out.go:252] * Restarting existing docker container for "embed-certs-413073" ...
	I1221 20:26:28.506853  349045 cli_runner.go:164] Run: docker start embed-certs-413073
	I1221 20:26:28.754220  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:28.772197  349045 kic.go:430] container "embed-certs-413073" state is running.
	I1221 20:26:28.772613  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:28.791483  349045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/config.json ...
	I1221 20:26:28.791662  349045 machine.go:94] provisionDockerMachine start ...
	I1221 20:26:28.791717  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:28.811016  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:28.811307  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:28.811325  349045 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:26:28.811830  349045 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33124: read: connection reset by peer
	I1221 20:26:31.973490  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:31.973519  349045 ubuntu.go:182] provisioning hostname "embed-certs-413073"
	I1221 20:26:31.973592  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:31.998312  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:31.998627  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:31.998655  349045 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-413073 && echo "embed-certs-413073" | sudo tee /etc/hostname
	I1221 20:26:32.169162  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-413073
	
	I1221 20:26:32.169295  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.197522  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.197833  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.197860  349045 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-413073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-413073/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-413073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:26:32.356078  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:26:32.356105  349045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:26:32.356129  349045 ubuntu.go:190] setting up certificates
	I1221 20:26:32.356139  349045 provision.go:84] configureAuth start
	I1221 20:26:32.356205  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:32.380995  349045 provision.go:143] copyHostCerts
	I1221 20:26:32.381067  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:26:32.381088  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:26:32.381158  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:26:32.381336  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:26:32.381352  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:26:32.381399  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:26:32.381517  349045 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:26:32.381528  349045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:26:32.381563  349045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:26:32.381652  349045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.embed-certs-413073 san=[127.0.0.1 192.168.94.2 embed-certs-413073 localhost minikube]
	I1221 20:26:32.479184  349045 provision.go:177] copyRemoteCerts
	I1221 20:26:32.479284  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:26:32.479340  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.505304  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:32.615477  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:26:32.637386  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1221 20:26:32.657941  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 20:26:32.679246  349045 provision.go:87] duration metric: took 323.089087ms to configureAuth
	I1221 20:26:32.679276  349045 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:26:32.679495  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:32.679620  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:32.704097  349045 main.go:144] libmachine: Using SSH client type: native
	I1221 20:26:32.704422  349045 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1221 20:26:32.704452  349045 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:26:32.393030  339032 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:26:32.393060  339032 pod_ready.go:86] duration metric: took 1.006253441s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.395886  339032 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.400375  339032 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.400399  339032 pod_ready.go:86] duration metric: took 4.491012ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.403288  339032 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.408032  339032 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.408055  339032 pod_ready.go:86] duration metric: took 4.736601ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.410124  339032 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.590191  339032 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:32.590243  339032 pod_ready.go:86] duration metric: took 180.076227ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:32.790437  339032 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.190998  339032 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:26:33.191030  339032 pod_ready.go:86] duration metric: took 400.559576ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.390945  339032 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790606  339032 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:26:33.790642  339032 pod_ready.go:86] duration metric: took 399.665202ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:33.790658  339032 pod_ready.go:40] duration metric: took 2.40784924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:33.840839  339032 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:26:33.865033  339032 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
	W1221 20:26:30.348013  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.348358  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:32.212243  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:34.711111  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:26:34.122259  349045 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:26:34.122300  349045 machine.go:97] duration metric: took 5.330623534s to provisionDockerMachine
	I1221 20:26:34.122318  349045 start.go:293] postStartSetup for "embed-certs-413073" (driver="docker")
	I1221 20:26:34.122332  349045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:26:34.122408  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:26:34.122462  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.145201  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.245112  349045 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:26:34.248686  349045 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:26:34.248719  349045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:26:34.248731  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:26:34.248796  349045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:26:34.248891  349045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:26:34.248979  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:26:34.257867  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:34.276308  349045 start.go:296] duration metric: took 153.975025ms for postStartSetup
	I1221 20:26:34.276373  349045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:26:34.276431  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.295030  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.390399  349045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:26:34.395009  349045 fix.go:56] duration metric: took 5.911162905s for fixHost
	I1221 20:26:34.395034  349045 start.go:83] releasing machines lock for "embed-certs-413073", held for 5.911210955s
	I1221 20:26:34.395103  349045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-413073
	I1221 20:26:34.415658  349045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:26:34.415718  349045 ssh_runner.go:195] Run: cat /version.json
	I1221 20:26:34.415753  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.415772  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:34.437353  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.438191  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:34.532205  349045 ssh_runner.go:195] Run: systemctl --version
	I1221 20:26:34.590519  349045 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:26:34.626292  349045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:26:34.631256  349045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:26:34.631323  349045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:26:34.640212  349045 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:26:34.640261  349045 start.go:496] detecting cgroup driver to use...
	I1221 20:26:34.640296  349045 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:26:34.640339  349045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:26:34.655152  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:26:34.666935  349045 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:26:34.666995  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:26:34.681162  349045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:26:34.694205  349045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:26:34.773836  349045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:26:34.866636  349045 docker.go:234] disabling docker service ...
	I1221 20:26:34.866704  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:26:34.883877  349045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:26:34.897764  349045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:26:34.992795  349045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:26:35.089519  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:26:35.101885  349045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:26:35.117012  349045 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:26:35.117071  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.125693  349045 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:26:35.125742  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.135514  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.144405  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.153105  349045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:26:35.161280  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.170948  349045 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.181393  349045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:26:35.190217  349045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:26:35.197559  349045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:26:35.204474  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.282887  349045 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:26:35.491553  349045 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:26:35.491642  349045 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:26:35.496405  349045 start.go:564] Will wait 60s for crictl version
	I1221 20:26:35.496470  349045 ssh_runner.go:195] Run: which crictl
	I1221 20:26:35.500988  349045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:26:35.525158  349045 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:26:35.525280  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.553291  349045 ssh_runner.go:195] Run: crio --version
	I1221 20:26:35.582458  349045 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:26:35.583603  349045 cli_runner.go:164] Run: docker network inspect embed-certs-413073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:26:35.601409  349045 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1221 20:26:35.605668  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.616349  349045 kubeadm.go:884] updating cluster {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:26:35.616474  349045 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:26:35.616526  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.647904  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.647924  349045 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:26:35.647969  349045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:26:35.672746  349045 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:26:35.672771  349045 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:26:35.672778  349045 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1221 20:26:35.672870  349045 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-413073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:26:35.672941  349045 ssh_runner.go:195] Run: crio config
	I1221 20:26:35.721006  349045 cni.go:84] Creating CNI manager for ""
	I1221 20:26:35.721028  349045 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:26:35.721041  349045 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:26:35.721060  349045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-413073 NodeName:embed-certs-413073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:26:35.721172  349045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-413073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:26:35.721262  349045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:26:35.729396  349045 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:26:35.729469  349045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:26:35.737076  349045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1221 20:26:35.749458  349045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:26:35.762014  349045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1221 20:26:35.776095  349045 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:26:35.779907  349045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:26:35.790127  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:35.871764  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:35.894457  349045 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073 for IP: 192.168.94.2
	I1221 20:26:35.894479  349045 certs.go:195] generating shared ca certs ...
	I1221 20:26:35.894498  349045 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:35.894692  349045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:26:35.894757  349045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:26:35.894773  349045 certs.go:257] generating profile certs ...
	I1221 20:26:35.894903  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/client.key
	I1221 20:26:35.894982  349045 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key.865f7206
	I1221 20:26:35.895039  349045 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key
	I1221 20:26:35.895195  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:26:35.895255  349045 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:26:35.895269  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:26:35.895316  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:26:35.895359  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:26:35.895394  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:26:35.895460  349045 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:26:35.896857  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:26:35.918148  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:26:35.937363  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:26:35.956791  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:26:35.980319  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1221 20:26:35.998307  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 20:26:36.016864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:26:36.035412  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/embed-certs-413073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:26:36.052147  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:26:36.068514  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:26:36.085864  349045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:26:36.104067  349045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:26:36.116311  349045 ssh_runner.go:195] Run: openssl version
	I1221 20:26:36.122281  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.129549  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:26:36.137800  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141357  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.141422  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:26:36.177095  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:26:36.184709  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.191985  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:26:36.199039  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202890  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.202936  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:26:36.237305  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:26:36.244698  349045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.251690  349045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:26:36.258834  349045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262601  349045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.262651  349045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:26:36.297116  349045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:26:36.304458  349045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:26:36.308039  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:26:36.343506  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:26:36.379330  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:26:36.419930  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:26:36.467926  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:26:36.517528  349045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 20:26:36.569807  349045 kubeadm.go:401] StartCluster: {Name:embed-certs-413073 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-413073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:26:36.569934  349045 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:26:36.570012  349045 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:26:36.603596  349045 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:26:36.603620  349045 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:26:36.603626  349045 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:26:36.603631  349045 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:26:36.603635  349045 cri.go:96] found id: ""
	I1221 20:26:36.603694  349045 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:26:36.615256  349045 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:26:36Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:26:36.615332  349045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:26:36.623063  349045 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:26:36.623081  349045 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:26:36.623168  349045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:26:36.630509  349045 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:26:36.631520  349045 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-413073" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.632152  349045 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-413073" cluster setting kubeconfig missing "embed-certs-413073" context setting]
	I1221 20:26:36.633238  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.635239  349045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:26:36.642696  349045 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1221 20:26:36.642724  349045 kubeadm.go:602] duration metric: took 19.637661ms to restartPrimaryControlPlane
	I1221 20:26:36.642733  349045 kubeadm.go:403] duration metric: took 72.941162ms to StartCluster
	I1221 20:26:36.642749  349045 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.642804  349045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:26:36.644942  349045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:26:36.645178  349045 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:26:36.645266  349045 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:26:36.645373  349045 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-413073"
	I1221 20:26:36.645392  349045 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-413073"
	W1221 20:26:36.645404  349045 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:26:36.645407  349045 addons.go:70] Setting dashboard=true in profile "embed-certs-413073"
	I1221 20:26:36.645432  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645434  349045 addons.go:239] Setting addon dashboard=true in "embed-certs-413073"
	I1221 20:26:36.645440  349045 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:26:36.645468  349045 addons.go:70] Setting default-storageclass=true in profile "embed-certs-413073"
	W1221 20:26:36.645444  349045 addons.go:248] addon dashboard should already be in state true
	I1221 20:26:36.645494  349045 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-413073"
	I1221 20:26:36.645510  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.645796  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645906  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.645963  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.647026  349045 out.go:179] * Verifying Kubernetes components...
	I1221 20:26:36.648142  349045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:26:36.670830  349045 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:26:36.671909  349045 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:26:36.671982  349045 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:26:36.672921  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:26:36.672938  349045 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:26:36.672981  349045 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.672995  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.672999  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:26:36.673047  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.673322  349045 addons.go:239] Setting addon default-storageclass=true in "embed-certs-413073"
	W1221 20:26:36.673343  349045 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:26:36.673379  349045 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:26:36.673831  349045 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:26:36.713604  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.716336  349045 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.716359  349045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:26:36.716417  349045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:26:36.717556  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.740725  349045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:26:36.799504  349045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:26:36.813392  349045 node_ready.go:35] waiting up to 6m0s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:36.827736  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:26:36.831307  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:26:36.831331  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:26:36.847340  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:26:36.847361  349045 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:26:36.857774  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:26:36.864116  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:26:36.864135  349045 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:26:36.880513  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:26:36.880541  349045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:26:36.895508  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:26:36.895533  349045 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:26:36.909454  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:26:36.909478  349045 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:26:36.923439  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:26:36.923466  349045 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:26:36.936237  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:26:36.936258  349045 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:26:36.948470  349045 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:36.948487  349045 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:26:36.960580  349045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:26:38.055702  349045 node_ready.go:49] node "embed-certs-413073" is "Ready"
	I1221 20:26:38.055739  349045 node_ready.go:38] duration metric: took 1.242302482s for node "embed-certs-413073" to be "Ready" ...
	I1221 20:26:38.055756  349045 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:26:38.055807  349045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:26:38.565557  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.70771546s)
	I1221 20:26:38.566433  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.605814489s)
	I1221 20:26:38.566655  349045 api_server.go:72] duration metric: took 1.921448818s to wait for apiserver process to appear ...
	I1221 20:26:38.566678  349045 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:26:38.566680  349045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.738896864s)
	I1221 20:26:38.566700  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:38.571884  349045 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-413073 addons enable metrics-server
	
	I1221 20:26:38.572921  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:26:38.573011  349045 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:26:38.580646  349045 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1221 20:26:34.847391  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.849413  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:39.348748  341446 pod_ready.go:104] pod "coredns-5dd5756b68-v285b" is not "Ready", error: <nil>
	W1221 20:26:36.714889  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:39.210974  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:26:38.581622  349045 addons.go:530] duration metric: took 1.936370568s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:26:39.066797  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:39.071340  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:26:39.071363  349045 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:26:39.567578  349045 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1221 20:26:39.572904  349045 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1221 20:26:39.574079  349045 api_server.go:141] control plane version: v1.34.3
	I1221 20:26:39.574110  349045 api_server.go:131] duration metric: took 1.007425087s to wait for apiserver health ...
	I1221 20:26:39.574121  349045 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:26:39.577965  349045 system_pods.go:59] 8 kube-system pods found
	I1221 20:26:39.578010  349045 system_pods.go:61] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:39.578018  349045 system_pods.go:61] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:39.578025  349045 system_pods.go:61] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:39.578034  349045 system_pods.go:61] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:39.578041  349045 system_pods.go:61] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:39.578049  349045 system_pods.go:61] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:39.578054  349045 system_pods.go:61] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:39.578073  349045 system_pods.go:61] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:39.578084  349045 system_pods.go:74] duration metric: took 3.956948ms to wait for pod list to return data ...
	I1221 20:26:39.578093  349045 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:26:39.580020  349045 default_sa.go:45] found service account: "default"
	I1221 20:26:39.580039  349045 default_sa.go:55] duration metric: took 1.940442ms for default service account to be created ...
	I1221 20:26:39.580046  349045 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:26:39.582151  349045 system_pods.go:86] 8 kube-system pods found
	I1221 20:26:39.582185  349045 system_pods.go:89] "coredns-66bc5c9577-lvwlf" [8a8e12ed-d550-467e-b4d4-bdf8e0ced6f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:26:39.582196  349045 system_pods.go:89] "etcd-embed-certs-413073" [58c9467d-c66a-4a4c-8213-d3a1c68a3bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:26:39.582215  349045 system_pods.go:89] "kindnet-qnfsx" [fe58c6e7-54ff-4b21-9574-3529a25f66d1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:26:39.582239  349045 system_pods.go:89] "kube-apiserver-embed-certs-413073" [a2669164-95fb-4ec3-9291-20561cce2302] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:26:39.582250  349045 system_pods.go:89] "kube-controller-manager-embed-certs-413073" [2f0377f5-2c3c-48b3-9915-050832abf582] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:26:39.582261  349045 system_pods.go:89] "kube-proxy-qvdzm" [654663b3-137f-4beb-8dac-3d7db7fba22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:26:39.582266  349045 system_pods.go:89] "kube-scheduler-embed-certs-413073" [e56c2a0a-a4c9-47d4-b84c-a9634e6ac3eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:26:39.582275  349045 system_pods.go:89] "storage-provisioner" [a901db92-ff3c-4b7d-b391-9265924cb998] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1221 20:26:39.582281  349045 system_pods.go:126] duration metric: took 2.230431ms to wait for k8s-apps to be running ...
	I1221 20:26:39.582290  349045 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:26:39.582327  349045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:26:39.595168  349045 system_svc.go:56] duration metric: took 12.869871ms WaitForService to wait for kubelet
	I1221 20:26:39.595204  349045 kubeadm.go:587] duration metric: took 2.949997064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:26:39.595275  349045 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:26:39.597579  349045 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:26:39.597599  349045 node_conditions.go:123] node cpu capacity is 8
	I1221 20:26:39.597612  349045 node_conditions.go:105] duration metric: took 2.327211ms to run NodePressure ...
	I1221 20:26:39.597621  349045 start.go:242] waiting for startup goroutines ...
	I1221 20:26:39.597629  349045 start.go:247] waiting for cluster config update ...
	I1221 20:26:39.597641  349045 start.go:256] writing updated cluster config ...
	I1221 20:26:39.597849  349045 ssh_runner.go:195] Run: rm -f paused
	I1221 20:26:39.601386  349045 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:39.604097  349045 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:26:41.609005  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:26:41.846863  341446 pod_ready.go:94] pod "coredns-5dd5756b68-v285b" is "Ready"
	I1221 20:26:41.846892  341446 pod_ready.go:86] duration metric: took 32.005636894s for pod "coredns-5dd5756b68-v285b" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.849729  341446 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.854590  341446 pod_ready.go:94] pod "etcd-old-k8s-version-699289" is "Ready"
	I1221 20:26:41.854623  341446 pod_ready.go:86] duration metric: took 4.871295ms for pod "etcd-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.857354  341446 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.860875  341446 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-699289" is "Ready"
	I1221 20:26:41.860893  341446 pod_ready.go:86] duration metric: took 3.516703ms for pod "kube-apiserver-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:41.863111  341446 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.046017  341446 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-699289" is "Ready"
	I1221 20:26:42.046051  341446 pod_ready.go:86] duration metric: took 182.920409ms for pod "kube-controller-manager-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.246045  341446 pod_ready.go:83] waiting for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.645985  341446 pod_ready.go:94] pod "kube-proxy-hsngj" is "Ready"
	I1221 20:26:42.646015  341446 pod_ready.go:86] duration metric: took 399.94762ms for pod "kube-proxy-hsngj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:42.847481  341446 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:43.245686  341446 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-699289" is "Ready"
	I1221 20:26:43.245717  341446 pod_ready.go:86] duration metric: took 398.204753ms for pod "kube-scheduler-old-k8s-version-699289" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:26:43.245732  341446 pod_ready.go:40] duration metric: took 33.412666685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:26:43.301870  341446 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1221 20:26:43.303149  341446 out.go:203] 
	W1221 20:26:43.304309  341446 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1221 20:26:43.305406  341446 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1221 20:26:43.306573  341446 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-699289" cluster and "default" namespace by default
	W1221 20:26:41.710083  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:43.712306  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:43.610081  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:45.610256  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:48.108857  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:46.212011  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:48.710356  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:50.109521  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:52.609030  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:50.711369  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:53.210589  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:26:54.609083  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:26:56.609209  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.456824198Z" level=info msg="Created container 26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm/kubernetes-dashboard" id=8d51b8e8-9db0-46fc-bc24-daf3a1325c3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.457338604Z" level=info msg="Starting container: 26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6" id=bd61fd57-a26c-4f83-a962-0ba8c00b6565 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:27 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:27.458895069Z" level=info msg="Started container" PID=1732 containerID=26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm/kubernetes-dashboard id=bd61fd57-a26c-4f83-a962-0ba8c00b6565 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca1864c1f5f2574630d417c22c6cbac07b8889ae04398d35572259dd7125a8fe
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.774836101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0d5ed1e5-73ad-45dd-a982-97de5a36d6ff name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.775765549Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8639f7f8-2d93-4158-a192-939f0175b1e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.777067736Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8799b430-d075-45ec-aaa0-1dfe10033a80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.777277023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782482848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782661073Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e74d62c2b7891d6334262dbef4d57e4309086136d2098ca7b448b6d58daa3cc9/merged/etc/passwd: no such file or directory"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782695281Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e74d62c2b7891d6334262dbef4d57e4309086136d2098ca7b448b6d58daa3cc9/merged/etc/group: no such file or directory"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.782977065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.806140344Z" level=info msg="Created container fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1: kube-system/storage-provisioner/storage-provisioner" id=8799b430-d075-45ec-aaa0-1dfe10033a80 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.80677734Z" level=info msg="Starting container: fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1" id=8cde22e9-2742-4885-807d-1bad20895285 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:39 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:39.808616351Z" level=info msg="Started container" PID=1755 containerID=fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1 description=kube-system/storage-provisioner/storage-provisioner id=8cde22e9-2742-4885-807d-1bad20895285 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b1aaa26f89056b45a62745c4d8398ec57e6789f2ff259b9397541d42052ffa5
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.666774352Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e00ee0e6-86fc-4498-bf04-a4e41d395711 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.667900026Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65ef8591-0419-41e8-bf14-7552adbb4816 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.669126745Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=57c2091c-2ad4-42f5-9684-575595a77fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.669339705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.677541673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.678355177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.710814948Z" level=info msg="Created container 65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=57c2091c-2ad4-42f5-9684-575595a77fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.712138512Z" level=info msg="Starting container: 65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef" id=7a8a44c5-edfe-4967-9f03-78163dcf036e name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.714832219Z" level=info msg="Started container" PID=1773 containerID=65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper id=7a8a44c5-edfe-4967-9f03-78163dcf036e name=/runtime.v1.RuntimeService/StartContainer sandboxID=49f748e9aa7147180a0d4b212198c27f4a6aea56866cc9b661e4a8be2d5204fc
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.786773308Z" level=info msg="Removing container: 626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b" id=10184fdf-ecdc-4ecd-9b4f-f6addd3d9b3b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:42 old-k8s-version-699289 crio[568]: time="2025-12-21T20:26:42.799320676Z" level=info msg="Removed container 626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972/dashboard-metrics-scraper" id=10184fdf-ecdc-4ecd-9b4f-f6addd3d9b3b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	65bf231f6b72e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   49f748e9aa714       dashboard-metrics-scraper-5f989dc9cf-vj972       kubernetes-dashboard
	fd913503e4ec5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   2b1aaa26f8905       storage-provisioner                              kube-system
	26d63f5746b09       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   ca1864c1f5f25       kubernetes-dashboard-8694d4445c-72bcm            kubernetes-dashboard
	1460cc6bb57e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   723f2aac14aa8       coredns-5dd5756b68-v285b                         kube-system
	40c6ee8927127       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   814f2a9392fc0       busybox                                          default
	d6f65e64c24a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   2b1aaa26f8905       storage-provisioner                              kube-system
	a0d586c455cc3       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   fa9ac78100246       kindnet-g5mb8                                    kube-system
	33c6adad84864       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   2bb99b42d1c06       kube-proxy-hsngj                                 kube-system
	d1fb79aa0d924       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   a5a90cf75a190       kube-controller-manager-old-k8s-version-699289   kube-system
	f568d82d77c18       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   302a8cab485fe       etcd-old-k8s-version-699289                      kube-system
	5fc8d02fce783       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   30b1de32413a9       kube-apiserver-old-k8s-version-699289            kube-system
	64bce6865fb1a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   af7d59ca88611       kube-scheduler-old-k8s-version-699289            kube-system
	
	
	==> coredns [1460cc6bb57e2081694d9423affc1178017a03dff842225c30fab505d7d2a95b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43636 - 61565 "HINFO IN 7723653539803018527.2493573840787226081. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.474792149s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-699289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-699289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=old-k8s-version-699289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-699289
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:24:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:26:39 +0000   Sun, 21 Dec 2025 20:25:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-699289
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                5608b3c9-c686-468f-89f8-92ad8cb9ae20
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-v285b                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-699289                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-g5mb8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-699289             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-699289    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-hsngj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-699289             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vj972        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-72bcm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-699289 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x9 over 55s)    kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-699289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)    kubelet          Node old-k8s-version-699289 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [f568d82d77c18300e44677d66b6b0bc4c5ba3b7d94a1b4f5b47db27571852dc4] <==
	{"level":"info","ts":"2025-12-21T20:26:06.221Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-21T20:26:06.221086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-21T20:26:06.221172Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-21T20:26:06.221291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:26:06.221335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-21T20:26:06.22359Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-21T20:26:06.223747Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:26:06.224199Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:26:06.223846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-21T20:26:06.22387Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:26:07.211703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:07.211779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.211785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.211793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.2118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:07.214628Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-699289 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:26:07.214664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:07.214675Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:07.21484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:07.21487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:07.215688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-21T20:26:07.215733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:26:27.855343Z","caller":"traceutil/trace.go:171","msg":"trace[147729461] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"104.530903ms","start":"2025-12-21T20:26:27.750789Z","end":"2025-12-21T20:26:27.85532Z","steps":["trace[147729461] 'process raft request'  (duration: 104.253761ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:27:00 up  1:09,  0 user,  load average: 4.40, 3.89, 2.76
	Linux old-k8s-version-699289 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0d586c455cc3e950fec3abf57e8834f990d21f159f890449ee01006af8b5ea3] <==
	I1221 20:26:09.358948       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:09.359219       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1221 20:26:09.359434       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:09.359461       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:09.359483       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:09.593493       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:09.593616       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:09.657635       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:09.657885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:09.957888       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:09.957920       1 metrics.go:72] Registering metrics
	I1221 20:26:09.958172       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:19.593425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:19.593508       1 main.go:301] handling current node
	I1221 20:26:29.594067       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:29.594108       1 main.go:301] handling current node
	I1221 20:26:39.593757       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:39.593790       1 main.go:301] handling current node
	I1221 20:26:49.598472       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:49.598505       1 main.go:301] handling current node
	I1221 20:26:59.600298       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1221 20:26:59.600400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fc8d02fce78360a2559c2f88b3c8e6e49a518cd94d46fcb3f5554e34a4b6559] <==
	I1221 20:26:08.337564       1 autoregister_controller.go:141] Starting autoregister controller
	I1221 20:26:08.337570       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:26:08.337577       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:26:08.337617       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1221 20:26:08.337664       1 shared_informer.go:318] Caches are synced for configmaps
	I1221 20:26:08.337700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1221 20:26:08.338062       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:08.343033       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:08.356277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1221 20:26:09.245589       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:26:09.593945       1 controller.go:624] quota admission added evaluator for: namespaces
	I1221 20:26:09.628335       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1221 20:26:09.653131       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:09.662976       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:09.671728       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1221 20:26:09.716662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.126.245"}
	I1221 20:26:09.736662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.136.92"}
	E1221 20:26:18.337757       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	I1221 20:26:21.291943       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:21.300914       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1221 20:26:21.337122       1 controller.go:624] quota admission added evaluator for: endpoints
	E1221 20:26:28.338252       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:38.338805       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:48.339919       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1221 20:26:58.341025       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [d1fb79aa0d924fff93f096054d4a46f8a8baf20e2df92302469d3c1b72a950b5] <==
	I1221 20:26:21.376858       1 taint_manager.go:211] "Sending events to api server"
	I1221 20:26:21.376854       1 event.go:307] "Event occurred" object="old-k8s-version-699289" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-699289 event: Registered Node old-k8s-version-699289 in Controller"
	I1221 20:26:21.376875       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1221 20:26:21.415522       1 shared_informer.go:318] Caches are synced for disruption
	I1221 20:26:21.417773       1 shared_informer.go:318] Caches are synced for stateful set
	I1221 20:26:21.424019       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1221 20:26:21.433473       1 shared_informer.go:318] Caches are synced for resource quota
	I1221 20:26:21.448169       1 shared_informer.go:318] Caches are synced for resource quota
	I1221 20:26:21.452696       1 shared_informer.go:318] Caches are synced for PVC protection
	I1221 20:26:21.476149       1 shared_informer.go:318] Caches are synced for expand
	I1221 20:26:21.481506       1 shared_informer.go:318] Caches are synced for persistent volume
	I1221 20:26:21.493967       1 shared_informer.go:318] Caches are synced for ephemeral
	I1221 20:26:21.502615       1 shared_informer.go:318] Caches are synced for attach detach
	I1221 20:26:21.863535       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:26:21.875738       1 shared_informer.go:318] Caches are synced for garbage collector
	I1221 20:26:21.875774       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1221 20:26:24.740768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.34µs"
	I1221 20:26:25.747747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.88µs"
	I1221 20:26:26.768488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="141.298µs"
	I1221 20:26:27.876221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.175413ms"
	I1221 20:26:27.877099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.986µs"
	I1221 20:26:41.730380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.633704ms"
	I1221 20:26:41.730513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.346µs"
	I1221 20:26:42.801460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.494µs"
	I1221 20:26:51.648831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.63µs"
	
	
	==> kube-proxy [33c6adad84864cf2665448db090a10c1199353f3e0dc0eea36e033cd09d820ea] <==
	I1221 20:26:09.213044       1 server_others.go:69] "Using iptables proxy"
	I1221 20:26:09.227980       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1221 20:26:09.264559       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:09.268664       1 server_others.go:152] "Using iptables Proxier"
	I1221 20:26:09.268746       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 20:26:09.268774       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 20:26:09.268833       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 20:26:09.269131       1 server.go:846] "Version info" version="v1.28.0"
	I1221 20:26:09.269180       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:09.269996       1 config.go:188] "Starting service config controller"
	I1221 20:26:09.270136       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 20:26:09.270194       1 config.go:315] "Starting node config controller"
	I1221 20:26:09.270218       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 20:26:09.271393       1 config.go:97] "Starting endpoint slice config controller"
	I1221 20:26:09.271430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 20:26:09.370698       1 shared_informer.go:318] Caches are synced for node config
	I1221 20:26:09.370810       1 shared_informer.go:318] Caches are synced for service config
	I1221 20:26:09.375997       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [64bce6865fb1a19663efbee434032c3951a1e1d68bb578e204142222a2c6880d] <==
	I1221 20:26:06.870515       1 serving.go:348] Generated self-signed cert in-memory
	W1221 20:26:08.272311       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:08.272343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:08.272356       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:08.272366       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:08.307856       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1221 20:26:08.308019       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:08.309834       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:08.309920       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 20:26:08.311710       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1221 20:26:08.311748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1221 20:26:08.410289       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.336873     732 topology_manager.go:215] "Topology Admit Handler" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452631     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5qd\" (UniqueName: \"kubernetes.io/projected/432bee5e-70b1-42af-8b1c-e6f832fcc048-kube-api-access-4d5qd\") pod \"kubernetes-dashboard-8694d4445c-72bcm\" (UID: \"432bee5e-70b1-42af-8b1c-e6f832fcc048\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452688     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/432bee5e-70b1-42af-8b1c-e6f832fcc048-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-72bcm\" (UID: \"432bee5e-70b1-42af-8b1c-e6f832fcc048\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452724     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/06e27cf5-d3c0-4f8b-98eb-01f030181bd6-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vj972\" (UID: \"06e27cf5-d3c0-4f8b-98eb-01f030181bd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:21 old-k8s-version-699289 kubelet[732]: I1221 20:26:21.452877     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669g7\" (UniqueName: \"kubernetes.io/projected/06e27cf5-d3c0-4f8b-98eb-01f030181bd6-kube-api-access-669g7\") pod \"dashboard-metrics-scraper-5f989dc9cf-vj972\" (UID: \"06e27cf5-d3c0-4f8b-98eb-01f030181bd6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972"
	Dec 21 20:26:24 old-k8s-version-699289 kubelet[732]: I1221 20:26:24.730410     732 scope.go:117] "RemoveContainer" containerID="15ea0c4c73c159ac337454513c9c54562de164d73f528042020447341e937d3f"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: I1221 20:26:25.734742     732 scope.go:117] "RemoveContainer" containerID="15ea0c4c73c159ac337454513c9c54562de164d73f528042020447341e937d3f"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: I1221 20:26:25.734939     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:25 old-k8s-version-699289 kubelet[732]: E1221 20:26:25.735330     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:26 old-k8s-version-699289 kubelet[732]: I1221 20:26:26.739045     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:26 old-k8s-version-699289 kubelet[732]: E1221 20:26:26.739510     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:31 old-k8s-version-699289 kubelet[732]: I1221 20:26:31.639073     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:31 old-k8s-version-699289 kubelet[732]: E1221 20:26:31.639382     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:39 old-k8s-version-699289 kubelet[732]: I1221 20:26:39.774411     732 scope.go:117] "RemoveContainer" containerID="d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc"
	Dec 21 20:26:39 old-k8s-version-699289 kubelet[732]: I1221 20:26:39.786244     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72bcm" podStartSLOduration=13.02867781 podCreationTimestamp="2025-12-21 20:26:21 +0000 UTC" firstStartedPulling="2025-12-21 20:26:21.663127711 +0000 UTC m=+16.087679725" lastFinishedPulling="2025-12-21 20:26:27.420618116 +0000 UTC m=+21.845170134" observedRunningTime="2025-12-21 20:26:27.857352383 +0000 UTC m=+22.281904417" watchObservedRunningTime="2025-12-21 20:26:39.786168219 +0000 UTC m=+34.210720250"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.665923     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.785458     732 scope.go:117] "RemoveContainer" containerID="626b8c0960565e6952b42011e7d528df96d8f830405377d0f5bd7c905952865b"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: I1221 20:26:42.785805     732 scope.go:117] "RemoveContainer" containerID="65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	Dec 21 20:26:42 old-k8s-version-699289 kubelet[732]: E1221 20:26:42.786157     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:51 old-k8s-version-699289 kubelet[732]: I1221 20:26:51.639629     732 scope.go:117] "RemoveContainer" containerID="65bf231f6b72ec6bb86fbc861eaec25a5f2644c2ea4e3ae48674ca9a4eaea2ef"
	Dec 21 20:26:51 old-k8s-version-699289 kubelet[732]: E1221 20:26:51.639956     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vj972_kubernetes-dashboard(06e27cf5-d3c0-4f8b-98eb-01f030181bd6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vj972" podUID="06e27cf5-d3c0-4f8b-98eb-01f030181bd6"
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:26:55 old-k8s-version-699289 systemd[1]: kubelet.service: Consumed 1.425s CPU time.
	
	
	==> kubernetes-dashboard [26d63f5746b0986cf668e5554f0d0b5f45d8c6f7f038ac6af8176c54019918e6] <==
	2025/12/21 20:26:27 Starting overwatch
	2025/12/21 20:26:27 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:27 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:27 Using secret token for csrf signing
	2025/12/21 20:26:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:27 Successful initial request to the apiserver, version: v1.28.0
	2025/12/21 20:26:27 Generating JWE encryption key
	2025/12/21 20:26:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:27 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:27 Creating in-cluster Sidecar client
	2025/12/21 20:26:27 Serving insecurely on HTTP port: 9090
	2025/12/21 20:26:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d6f65e64c24a32dbccff7a492849afe0f8b397f3e8b8bfafdc51ac6af69c2afc] <==
	I1221 20:26:09.161606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:26:39.168393       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd913503e4ec5aa2ad9cb28d1d7a17c80d49e3117ba3a245383b44ca8b45aeb1] <==
	I1221 20:26:39.820936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:26:39.828713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:26:39.828756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 20:26:57.224532       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:26:57.224666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff!
	I1221 20:26:57.224665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b7387ac-0eac-492c-9220-7a6071dd4756", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff became leader
	I1221 20:26:57.324857       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-699289_2d7fc954-4fb1-4b88-84cf-28a19fe87dff!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-699289 -n old-k8s-version-699289
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-699289 -n old-k8s-version-699289: exit status 2 (345.938515ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-699289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-328404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-328404 --alsologtostderr -v=1: exit status 80 (2.574149963s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-328404 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:27:18.954887  360246 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:18.955183  360246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:18.955202  360246 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:18.955208  360246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:18.955483  360246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:18.955759  360246 out.go:368] Setting JSON to false
	I1221 20:27:18.955784  360246 mustload.go:66] Loading cluster: no-preload-328404
	I1221 20:27:18.956364  360246 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:18.956877  360246 cli_runner.go:164] Run: docker container inspect no-preload-328404 --format={{.State.Status}}
	I1221 20:27:18.981290  360246 host.go:66] Checking if "no-preload-328404" exists ...
	I1221 20:27:18.982281  360246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:19.059412  360246 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-21 20:27:19.045940625 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:19.060400  360246 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-328404 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotific
ation:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:27:19.063835  360246 out.go:179] * Pausing node no-preload-328404 ... 
	I1221 20:27:19.065125  360246 host.go:66] Checking if "no-preload-328404" exists ...
	I1221 20:27:19.065556  360246 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:19.065617  360246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328404
	I1221 20:27:19.093671  360246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/no-preload-328404/id_rsa Username:docker}
	I1221 20:27:19.201035  360246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:19.230706  360246 pause.go:52] kubelet running: true
	I1221 20:27:19.230772  360246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:19.465704  360246 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:19.465828  360246 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:19.546561  360246 cri.go:96] found id: "c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f"
	I1221 20:27:19.546590  360246 cri.go:96] found id: "a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316"
	I1221 20:27:19.546597  360246 cri.go:96] found id: "fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c"
	I1221 20:27:19.546602  360246 cri.go:96] found id: "f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	I1221 20:27:19.546605  360246 cri.go:96] found id: "3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b"
	I1221 20:27:19.546608  360246 cri.go:96] found id: "bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca"
	I1221 20:27:19.546613  360246 cri.go:96] found id: "0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf"
	I1221 20:27:19.546618  360246 cri.go:96] found id: "98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6"
	I1221 20:27:19.546624  360246 cri.go:96] found id: "d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2"
	I1221 20:27:19.546637  360246 cri.go:96] found id: "35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	I1221 20:27:19.546649  360246 cri.go:96] found id: "bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011"
	I1221 20:27:19.546654  360246 cri.go:96] found id: ""
	I1221 20:27:19.546700  360246 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:19.559581  360246 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:19Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:19.805011  360246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:19.822746  360246 pause.go:52] kubelet running: false
	I1221 20:27:19.822828  360246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:20.006462  360246 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:20.006563  360246 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:20.077368  360246 cri.go:96] found id: "c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f"
	I1221 20:27:20.077389  360246 cri.go:96] found id: "a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316"
	I1221 20:27:20.077393  360246 cri.go:96] found id: "fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c"
	I1221 20:27:20.077397  360246 cri.go:96] found id: "f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	I1221 20:27:20.077399  360246 cri.go:96] found id: "3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b"
	I1221 20:27:20.077403  360246 cri.go:96] found id: "bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca"
	I1221 20:27:20.077407  360246 cri.go:96] found id: "0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf"
	I1221 20:27:20.077412  360246 cri.go:96] found id: "98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6"
	I1221 20:27:20.077417  360246 cri.go:96] found id: "d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2"
	I1221 20:27:20.077426  360246 cri.go:96] found id: "35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	I1221 20:27:20.077438  360246 cri.go:96] found id: "bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011"
	I1221 20:27:20.077446  360246 cri.go:96] found id: ""
	I1221 20:27:20.077486  360246 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:20.436990  360246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:20.453203  360246 pause.go:52] kubelet running: false
	I1221 20:27:20.453354  360246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:20.655257  360246 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:20.655345  360246 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:20.727664  360246 cri.go:96] found id: "c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f"
	I1221 20:27:20.727687  360246 cri.go:96] found id: "a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316"
	I1221 20:27:20.727691  360246 cri.go:96] found id: "fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c"
	I1221 20:27:20.727694  360246 cri.go:96] found id: "f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	I1221 20:27:20.727697  360246 cri.go:96] found id: "3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b"
	I1221 20:27:20.727700  360246 cri.go:96] found id: "bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca"
	I1221 20:27:20.727703  360246 cri.go:96] found id: "0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf"
	I1221 20:27:20.727717  360246 cri.go:96] found id: "98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6"
	I1221 20:27:20.727720  360246 cri.go:96] found id: "d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2"
	I1221 20:27:20.727726  360246 cri.go:96] found id: "35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	I1221 20:27:20.727729  360246 cri.go:96] found id: "bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011"
	I1221 20:27:20.727734  360246 cri.go:96] found id: ""
	I1221 20:27:20.727783  360246 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:21.071426  360246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:21.091509  360246 pause.go:52] kubelet running: false
	I1221 20:27:21.091620  360246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:21.326408  360246 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:21.326497  360246 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:21.419505  360246 cri.go:96] found id: "c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f"
	I1221 20:27:21.419529  360246 cri.go:96] found id: "a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316"
	I1221 20:27:21.419535  360246 cri.go:96] found id: "fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c"
	I1221 20:27:21.419540  360246 cri.go:96] found id: "f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	I1221 20:27:21.419545  360246 cri.go:96] found id: "3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b"
	I1221 20:27:21.419549  360246 cri.go:96] found id: "bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca"
	I1221 20:27:21.419554  360246 cri.go:96] found id: "0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf"
	I1221 20:27:21.419558  360246 cri.go:96] found id: "98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6"
	I1221 20:27:21.419563  360246 cri.go:96] found id: "d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2"
	I1221 20:27:21.419580  360246 cri.go:96] found id: "35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	I1221 20:27:21.419586  360246 cri.go:96] found id: "bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011"
	I1221 20:27:21.419590  360246 cri.go:96] found id: ""
	I1221 20:27:21.419650  360246 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:21.437103  360246 out.go:203] 
	W1221 20:27:21.438370  360246 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:27:21.438390  360246 out.go:285] * 
	* 
	W1221 20:27:21.444838  360246 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:27:21.449325  360246 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-328404 --alsologtostderr -v=1 failed: exit status 80
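The stderr block above shows the failing step directly: the crictl listing of kube-system containers succeeds, but the follow-up `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, which minikube surfaces as GUEST_PAUSE. A minimal reproduction sketch, assuming the no-preload-328404 profile is still running; the `minikube ssh "<cmd>"` form below is an illustrative stand-in for the test's ssh_runner calls, not the test's own code:

	out/minikube-linux-amd64 -p no-preload-328404 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds: same container IDs as the cri.go:96 lines above
	out/minikube-linux-amd64 -p no-preload-328404 ssh "ls -ld /run/runc"                                                                         # expected to fail: the directory named in the runc error is absent
	out/minikube-linux-amd64 -p no-preload-328404 ssh "sudo runc list -f json"                                                                   # reproduces the exit status 1 captured in the stderr

If the directory is indeed missing on the node, the same error should repeat on each attempt, matching the repeated `runc list` failures logged between 20:27:19 and 20:27:21 above.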
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328404
helpers_test.go:244: (dbg) docker inspect no-preload-328404:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	        "Created": "2025-12-21T20:24:59.700822041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:26:15.483158893Z",
	            "FinishedAt": "2025-12-21T20:26:14.568925545Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hosts",
	        "LogPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c-json.log",
	        "Name": "/no-preload-328404",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328404:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-328404",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	                "LowerDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328404",
	                "Source": "/var/lib/docker/volumes/no-preload-328404/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328404",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328404",
	                "name.minikube.sigs.k8s.io": "no-preload-328404",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cfcc5c15226dda23af07b5a10b10cf21180f51c42f47b1650e69d3ce1c72b866",
	            "SandboxKey": "/var/run/docker/netns/cfcc5c15226d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-328404": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3825326ac2cef213f4d7f258fd319688605c412ad1609130b5a218375fcefc22",
	                    "EndpointID": "23432b4aad4f93e78932ec14303a64217464897893f46c22c3f8e7739b4b0db7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:ed:d4:29:81:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328404",
	                        "15210117610b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
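For reference, the ssh endpoint the pause command used (127.0.0.1:33119 in the sshutil line of the stderr above) corresponds to the `22/tcp` HostPort in this inspect output. A small example of extracting it with the same Go template the cli_runner lines show (shell quoting adapted for interactive use; the profile name is taken from this report):

	docker container inspect no-preload-328404 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # prints 33119 for the state captured above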
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404: exit status 2 (392.948548ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328404 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-328404 logs -n 25: (2.076530322s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                                    │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:04.161028  356149 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:04.161303  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161311  356149 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:04.161315  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161505  356149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:04.161969  356149 out.go:368] Setting JSON to false
	I1221 20:27:04.163121  356149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4173,"bootTime":1766344651,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:04.163191  356149 start.go:143] virtualization: kvm guest
	I1221 20:27:04.165113  356149 out.go:179] * [newest-cni-734511] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:04.166326  356149 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:04.166322  356149 notify.go:221] Checking for updates...
	I1221 20:27:04.168489  356149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:04.169743  356149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:04.170878  356149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:04.171920  356149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:04.172960  356149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:04.174444  356149 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174550  356149 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174656  356149 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:04.174752  356149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:04.199923  356149 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:04.200099  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.255148  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.245163223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.255317  356149 docker.go:319] overlay module found
	I1221 20:27:04.256944  356149 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:04.258122  356149 start.go:309] selected driver: docker
	I1221 20:27:04.258135  356149 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:04.258146  356149 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:04.258746  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.313188  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.304012682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.313409  356149 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1221 20:27:04.313445  356149 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1221 20:27:04.313719  356149 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:04.315617  356149 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:04.316685  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:04.316752  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:04.316769  356149 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:04.316847  356149 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:04.318044  356149 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:04.319025  356149 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:04.319999  356149 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:04.320951  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.320986  356149 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:04.320999  356149 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:04.321043  356149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:04.321074  356149 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:04.321084  356149 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:04.321164  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:04.321181  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json: {Name:mka6cda6f0218fe0b8ed835e73384be1466cd914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:04.340148  356149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:04.340164  356149 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
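
The two cache checks above are why nothing is downloaded here: the preload tarball is already on disk and the kicbase image is already in the local docker daemon. A minimal Go sketch of equivalent checks, not minikube's cache.go; the tarball path and image tag are copied from this log only for illustration, and probing with `docker image inspect` is an assumption about one way to test for the image:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// preloadCached reports whether the preload tarball already exists on disk.
func preloadCached(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

// imageInDaemon reports whether the image is already loaded in the local
// docker daemon; `docker image inspect` exits non-zero when it is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	// Illustrative paths/tags taken from the log above, not hard requirements.
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4")
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260"
	fmt.Println("preload tarball cached:", preloadCached(tarball))
	fmt.Println("kicbase in daemon:", imageInDaemon(image))
}
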
	I1221 20:27:04.340186  356149 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:04.340217  356149 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:04.340337  356149 start.go:364] duration metric: took 80.745µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:04.340360  356149 start.go:93] Provisioning new machine with config: &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:04.340419  356149 start.go:125] createHost starting for "" (driver="docker")
	W1221 20:27:00.711936  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:27:03.210810  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:27:04.712597  345910 pod_ready.go:94] pod "coredns-7d764666f9-wkztz" is "Ready"
	I1221 20:27:04.712638  345910 pod_ready.go:86] duration metric: took 39.007284258s for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.715404  345910 pod_ready.go:83] waiting for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.719865  345910 pod_ready.go:94] pod "etcd-no-preload-328404" is "Ready"
	I1221 20:27:04.719886  345910 pod_ready.go:86] duration metric: took 4.454533ms for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.722758  345910 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.726749  345910 pod_ready.go:94] pod "kube-apiserver-no-preload-328404" is "Ready"
	I1221 20:27:04.726768  345910 pod_ready.go:86] duration metric: took 3.987664ms for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.728754  345910 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.909743  345910 pod_ready.go:94] pod "kube-controller-manager-no-preload-328404" is "Ready"
	I1221 20:27:04.909773  345910 pod_ready.go:86] duration metric: took 180.998003ms for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.110503  345910 pod_ready.go:83] waiting for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.509849  345910 pod_ready.go:94] pod "kube-proxy-tnpxj" is "Ready"
	I1221 20:27:05.509877  345910 pod_ready.go:86] duration metric: took 399.350496ms for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.710358  345910 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109831  345910 pod_ready.go:94] pod "kube-scheduler-no-preload-328404" is "Ready"
	I1221 20:27:06.109858  345910 pod_ready.go:86] duration metric: took 399.475178ms for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109870  345910 pod_ready.go:40] duration metric: took 40.408845738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:06.161975  345910 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:06.166771  345910 out.go:179] * Done! kubectl is now configured to use "no-preload-328404" cluster and "default" namespace by default
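
The pod_ready.go lines above poll each kube-system pod until it reports the Ready condition (coredns took ~39s, the static control-plane pods were already Ready). A hedged client-go sketch of that per-pod check, assuming a standard kubeconfig in the default location; it is illustrative, not minikube's implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries the Ready=True condition,
// the same signal the waits above are looking for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the cluster; pod name copied from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-scheduler-no-preload-328404", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}
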
	I1221 20:27:01.942630  355293 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-766361" ...
	I1221 20:27:01.942690  355293 cli_runner.go:164] Run: docker start default-k8s-diff-port-766361
	I1221 20:27:02.181766  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:02.200499  355293 kic.go:430] container "default-k8s-diff-port-766361" state is running.
	I1221 20:27:02.200866  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:02.221322  355293 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json ...
	I1221 20:27:02.221536  355293 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:02.221591  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:02.240688  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:02.240957  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:02.240973  355293 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:02.241682  355293 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43750->127.0.0.1:33129: read: connection reset by peer
	I1221 20:27:05.381889  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.381916  355293 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-766361"
	I1221 20:27:05.381967  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.401135  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.401433  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.401460  355293 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766361 && echo "default-k8s-diff-port-766361" | sudo tee /etc/hostname
	I1221 20:27:05.555524  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.555604  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.576000  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.576357  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.576389  355293 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766361/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:05.714615  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
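
The provisioning steps above run plain shell commands (hostname, the /etc/hosts edit) over SSH to the container's published port. A minimal sketch of such an SSH round-trip with golang.org/x/crypto/ssh; the port and key path are taken from this log, and everything else is an assumption rather than minikube's libmachine code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port copied from the log above; adjust for a real cluster.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-766361/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway local node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33129", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
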
	I1221 20:27:05.714643  355293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:05.714683  355293 ubuntu.go:190] setting up certificates
	I1221 20:27:05.714693  355293 provision.go:84] configureAuth start
	I1221 20:27:05.714749  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:05.733905  355293 provision.go:143] copyHostCerts
	I1221 20:27:05.734008  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:05.734027  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:05.734108  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:05.734253  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:05.734268  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:05.734313  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:05.734473  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:05.734485  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:05.734515  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:05.734605  355293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766361 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-766361 localhost minikube]
	I1221 20:27:05.885586  355293 provision.go:177] copyRemoteCerts
	I1221 20:27:05.885657  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:05.885704  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.903686  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:06.004376  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:06.022329  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1221 20:27:06.039861  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:06.057192  355293 provision.go:87] duration metric: took 342.475794ms to configureAuth
	I1221 20:27:06.057250  355293 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:06.057479  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:06.057615  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:06.077189  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:06.077572  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:06.077607  355293 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1221 20:27:05.109977  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:07.609706  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:04.342608  356149 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1221 20:27:04.342833  356149 start.go:159] libmachine.API.Create for "newest-cni-734511" (driver="docker")
	I1221 20:27:04.342865  356149 client.go:173] LocalClient.Create starting
	I1221 20:27:04.342925  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 20:27:04.342953  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.342973  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343034  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 20:27:04.343056  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.343071  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343576  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 20:27:04.359499  356149 cli_runner.go:211] docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 20:27:04.359553  356149 network_create.go:284] running [docker network inspect newest-cni-734511] to gather additional debugging logs...
	I1221 20:27:04.359572  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511
	W1221 20:27:04.375487  356149 cli_runner.go:211] docker network inspect newest-cni-734511 returned with exit code 1
	I1221 20:27:04.375516  356149 network_create.go:287] error running [docker network inspect newest-cni-734511]: docker network inspect newest-cni-734511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-734511 not found
	I1221 20:27:04.375530  356149 network_create.go:289] output of [docker network inspect newest-cni-734511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-734511 not found
	
	** /stderr **
	I1221 20:27:04.375669  356149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:04.393047  356149 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f29a930c06e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8b:29:89:af:bd} reservation:<nil>}
	I1221 20:27:04.393765  356149 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ef9486b81b4e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:74:fc:8d:d6:e1} reservation:<nil>}
	I1221 20:27:04.394589  356149 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a8eed82beee6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:19:43:42:02:f6} reservation:<nil>}
	I1221 20:27:04.395482  356149 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e58c10}
	I1221 20:27:04.395503  356149 network_create.go:124] attempt to create docker network newest-cni-734511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1221 20:27:04.395573  356149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-734511 newest-cni-734511
	I1221 20:27:04.440797  356149 network_create.go:108] docker network newest-cni-734511 192.168.76.0/24 created
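
The network.go lines above skip 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because host bridges already sit in them, then settle on 192.168.76.0/24 for the new docker network. A rough Go sketch of that kind of scan; the step of 9 mirrors the progression visible in this log, and the rest is an assumption, not minikube's actual subnet-reservation logic:

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ia, ok := a.(*net.IPNet); ok && ipnet.Contains(ia.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// 49, 58, 67, 76, ... follows the candidates seen in the log above.
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if !taken {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}
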
	I1221 20:27:04.440827  356149 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-734511" container
	I1221 20:27:04.440895  356149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 20:27:04.457596  356149 cli_runner.go:164] Run: docker volume create newest-cni-734511 --label name.minikube.sigs.k8s.io=newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true
	I1221 20:27:04.474472  356149 oci.go:103] Successfully created a docker volume newest-cni-734511
	I1221 20:27:04.474552  356149 cli_runner.go:164] Run: docker run --rm --name newest-cni-734511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --entrypoint /usr/bin/test -v newest-cni-734511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 20:27:04.874657  356149 oci.go:107] Successfully prepared a docker volume newest-cni-734511
	I1221 20:27:04.874806  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.874826  356149 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 20:27:04.874898  356149 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 20:27:08.234181  356149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359233452s)
	I1221 20:27:08.234217  356149 kic.go:203] duration metric: took 3.359386954s to extract preloaded images to volume ...
	W1221 20:27:08.234353  356149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 20:27:08.234414  356149 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 20:27:08.234470  356149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 20:27:08.295476  356149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-734511 --name newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-734511 --network newest-cni-734511 --ip 192.168.76.2 --volume newest-cni-734511:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 20:27:08.565567  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Running}}
	I1221 20:27:08.583983  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.604221  356149 cli_runner.go:164] Run: docker exec newest-cni-734511 stat /var/lib/dpkg/alternatives/iptables
	I1221 20:27:08.654194  356149 oci.go:144] the created container "newest-cni-734511" has a running status.
	I1221 20:27:08.654253  356149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa...
	I1221 20:27:08.704802  356149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 20:27:08.732838  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.751273  356149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 20:27:08.751296  356149 kic_runner.go:114] Args: [docker exec --privileged newest-cni-734511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 20:27:08.793174  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.814689  356149 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:08.814784  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:08.835179  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:08.835685  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:08.835721  356149 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:08.836734  356149 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54514->127.0.0.1:33134: read: connection reset by peer
	I1221 20:27:08.318032  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:08.318063  355293 machine.go:97] duration metric: took 6.096511406s to provisionDockerMachine
	I1221 20:27:08.318079  355293 start.go:293] postStartSetup for "default-k8s-diff-port-766361" (driver="docker")
	I1221 20:27:08.318096  355293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:08.318170  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:08.318243  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.339519  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.441820  355293 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:08.446242  355293 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:08.446278  355293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:08.446291  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:08.446430  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:08.446568  355293 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:08.446699  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:08.454698  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:08.473177  355293 start.go:296] duration metric: took 155.082818ms for postStartSetup
	I1221 20:27:08.473319  355293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:08.473379  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.492373  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.588791  355293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:08.593998  355293 fix.go:56] duration metric: took 6.67202468s for fixHost
	I1221 20:27:08.594026  355293 start.go:83] releasing machines lock for "default-k8s-diff-port-766361", held for 6.672074779s
	I1221 20:27:08.594093  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:08.614584  355293 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:08.614626  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.614688  355293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:08.614776  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.635066  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.635410  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.798479  355293 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:08.805888  355293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:08.851201  355293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:08.857838  355293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:08.857908  355293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:08.869971  355293 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:08.869994  355293 start.go:496] detecting cgroup driver to use...
	I1221 20:27:08.870021  355293 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:08.870056  355293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:08.886198  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:08.900320  355293 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:08.900392  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:08.916379  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:08.929614  355293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:09.017529  355293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:09.102487  355293 docker.go:234] disabling docker service ...
	I1221 20:27:09.102541  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:09.117923  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:09.130875  355293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:09.210057  355293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:09.290821  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:09.302670  355293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:09.316043  355293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:09.316090  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.324521  355293 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:09.324576  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.332846  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.340926  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.349091  355293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:09.357325  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.366239  355293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.374613  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.383590  355293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:09.390644  355293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:09.397642  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.469485  355293 ssh_runner.go:195] Run: sudo systemctl restart crio
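
The sed -i commands above rewrite pause_image, cgroup_manager and related keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A small Go sketch with the same effect for the two simplest keys, shown only to make the edit explicit; the key names and values come from the log, the file handling is illustrative rather than minikube's crio.go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites any `key = ...` line in the drop-in, matching the effect of
// the sed -i substitutions in the log above.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
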
	I1221 20:27:09.603676  355293 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:09.603754  355293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:09.608196  355293 start.go:564] Will wait 60s for crictl version
	I1221 20:27:09.608299  355293 ssh_runner.go:195] Run: which crictl
	I1221 20:27:09.611955  355293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:09.635202  355293 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:09.635292  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.662582  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.691390  355293 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:27:09.692632  355293 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-766361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:09.713083  355293 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:09.717679  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.728452  355293 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:09.728580  355293 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:27:09.728646  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.760480  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.760502  355293 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:09.760551  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.786108  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.786130  355293 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:09.786137  355293 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1221 20:27:09.786272  355293 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-766361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:09.786341  355293 ssh_runner.go:195] Run: crio config
	I1221 20:27:09.833071  355293 cni.go:84] Creating CNI manager for ""
	I1221 20:27:09.833099  355293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:09.833112  355293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:27:09.833133  355293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766361 NodeName:default-k8s-diff-port-766361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:09.833275  355293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766361"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:09.833341  355293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:27:09.842261  355293 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:09.842317  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:09.849946  355293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1221 20:27:09.861851  355293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:27:09.873798  355293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1221 20:27:09.886300  355293 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:09.889860  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.899253  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.978391  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:10.002606  355293 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361 for IP: 192.168.103.2
	I1221 20:27:10.002626  355293 certs.go:195] generating shared ca certs ...
	I1221 20:27:10.002644  355293 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.002811  355293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:10.002880  355293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:10.002892  355293 certs.go:257] generating profile certs ...
	I1221 20:27:10.003002  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key
	I1221 20:27:10.003076  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53
	I1221 20:27:10.003131  355293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key
	I1221 20:27:10.003288  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:10.003336  355293 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:10.003359  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:10.003393  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:10.003426  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:10.003465  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:10.003533  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:10.004374  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:10.023130  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:10.042080  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:10.062135  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:10.085174  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1221 20:27:10.106654  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:10.126596  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:10.145813  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:10.163770  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:10.180292  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:10.198557  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:10.214868  355293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:10.226847  355293 ssh_runner.go:195] Run: openssl version
	I1221 20:27:10.233097  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.240743  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:10.248144  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251615  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251669  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.287002  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:10.294132  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.301357  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:10.308313  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311705  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311741  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.346268  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:10.353551  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.360546  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:10.367671  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371287  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371336  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.406685  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:10.413819  355293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:10.417462  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:10.454011  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:10.488179  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:10.533872  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:10.576052  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:10.629693  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
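
The six openssl runs above each amount to "openssl x509 -noout -checkend 86400": a control-plane certificate is treated as stale if it expires within the next 24 hours. A minimal Go sketch of the same check, using one of the PEM files from the log as an example input, could look like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked in the log above; any PEM-encoded cert works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of "-checkend 86400": fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
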
	I1221 20:27:10.670862  355293 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:10.670963  355293 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:10.671037  355293 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:10.702259  355293 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:27:10.702282  355293 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:27:10.702285  355293 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:27:10.702295  355293 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:27:10.702298  355293 cri.go:96] found id: ""
	I1221 20:27:10.702339  355293 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:10.714908  355293 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:10Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:10.714989  355293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:10.722893  355293 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:10.722911  355293 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:10.722953  355293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:10.730397  355293 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:10.731501  355293 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-766361" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.732093  355293 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-766361" cluster setting kubeconfig missing "default-k8s-diff-port-766361" context setting]
	I1221 20:27:10.733154  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.734776  355293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:10.742370  355293 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1221 20:27:10.742398  355293 kubeadm.go:602] duration metric: took 19.480686ms to restartPrimaryControlPlane
	I1221 20:27:10.742407  355293 kubeadm.go:403] duration metric: took 71.557752ms to StartCluster
	I1221 20:27:10.742421  355293 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.742483  355293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.744452  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.744686  355293 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:10.744774  355293 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:10.744878  355293 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744895  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:10.744908  355293 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744913  355293 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744941  355293 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-766361"
	I1221 20:27:10.744900  355293 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.744955  355293 addons.go:248] addon dashboard should already be in state true
	W1221 20:27:10.744979  355293 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:10.744986  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.745018  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.744922  355293 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766361"
	I1221 20:27:10.745404  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745485  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745524  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.750065  355293 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:10.751603  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:10.771924  355293 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:27:10.771928  355293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:10.773031  355293 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.773050  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:10.773064  355293 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:10.773110  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.773127  355293 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.773144  355293 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:10.773173  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.773700  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.774627  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:10.774645  355293 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:10.774701  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.807788  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.809438  355293 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.809458  355293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:10.809514  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.812330  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.832737  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.891658  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:10.905174  355293 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:27:10.923657  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:10.923678  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:10.924773  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.938030  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:10.938053  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:10.947339  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.952101  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:10.952123  355293 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:10.966725  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:10.966747  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:10.982019  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:10.982043  355293 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:10.996528  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:10.996558  355293 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:11.009822  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:11.009847  355293 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:11.022602  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:11.022625  355293 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:11.034599  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:11.034621  355293 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:11.046622  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1221 20:27:09.610037  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:12.110288  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:12.977615  355293 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:12.977667  355293 node_ready.go:38] duration metric: took 2.072442361s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:27:12.977685  355293 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:12.977831  355293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:13.589060  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664212034s)
	I1221 20:27:13.589105  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.641740556s)
	I1221 20:27:13.589236  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.542570549s)
	I1221 20:27:13.589304  355293 api_server.go:72] duration metric: took 2.844588927s to wait for apiserver process to appear ...
	I1221 20:27:13.589365  355293 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:13.589385  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:13.590939  355293 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-766361 addons enable metrics-server
	
	I1221 20:27:13.594212  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:13.594241  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
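
The 500 above is the apiserver's aggregate /healthz result while its rbac and priority-class bootstrap post-start hooks are still completing; the wait loop simply retries until the endpoint returns 200, which it does about a second later in this log. A self-contained Go sketch of such a poll loop, with TLS verification skipped only so the example runs without the cluster CA, might be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll the health endpoint from the log until it reports HTTP 200.
	// InsecureSkipVerify is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.103.2:8444/healthz"
	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d: %s\n", attempt, resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
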
	I1221 20:27:13.599341  355293 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:11.977348  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:11.977379  356149 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:11.977454  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:11.999751  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:11.999976  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:11.999994  356149 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:12.157144  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:12.157257  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.179924  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.180242  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.180272  356149 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:12.325486  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:12.325514  356149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:12.325536  356149 ubuntu.go:190] setting up certificates
	I1221 20:27:12.325549  356149 provision.go:84] configureAuth start
	I1221 20:27:12.325622  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:12.346791  356149 provision.go:143] copyHostCerts
	I1221 20:27:12.346858  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:12.346870  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:12.346953  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:12.347063  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:12.347077  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:12.347117  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:12.347205  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:12.347216  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:12.347269  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:12.347357  356149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:12.416614  356149 provision.go:177] copyRemoteCerts
	I1221 20:27:12.416685  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:12.416736  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.438322  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:12.547462  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:12.566972  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:12.584445  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:12.602292  356149 provision.go:87] duration metric: took 276.731864ms to configureAuth
	I1221 20:27:12.602317  356149 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:12.602481  356149 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:12.602570  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.628085  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.628416  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.628446  356149 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:12.963462  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:12.963499  356149 machine.go:97] duration metric: took 4.148788477s to provisionDockerMachine
	I1221 20:27:12.963511  356149 client.go:176] duration metric: took 8.620635665s to LocalClient.Create
	I1221 20:27:12.963527  356149 start.go:167] duration metric: took 8.620693811s to libmachine.API.Create "newest-cni-734511"
	I1221 20:27:12.963536  356149 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:12.963549  356149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:12.963616  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:12.963661  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.994720  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.106837  356149 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:13.112217  356149 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:13.112284  356149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:13.112297  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:13.112360  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:13.112453  356149 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:13.112574  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:13.123914  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:13.152209  356149 start.go:296] duration metric: took 188.649352ms for postStartSetup
	I1221 20:27:13.152586  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.174145  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:13.174476  356149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:13.174533  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.195734  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.296538  356149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:13.301216  356149 start.go:128] duration metric: took 8.960783247s to createHost
	I1221 20:27:13.301259  356149 start.go:83] releasing machines lock for "newest-cni-734511", held for 8.96090932s
	I1221 20:27:13.301374  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.323173  356149 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:13.323205  356149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:13.323244  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.323280  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.346513  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.347201  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.456203  356149 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:13.536683  356149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:13.585062  356149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:13.590455  356149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:13.590524  356149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:13.622114  356149 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 20:27:13.622139  356149 start.go:496] detecting cgroup driver to use...
	I1221 20:27:13.622174  356149 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:13.622272  356149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:13.639104  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:13.651381  356149 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:13.651453  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:13.667983  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:13.685002  356149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:13.775846  356149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:13.866075  356149 docker.go:234] disabling docker service ...
	I1221 20:27:13.866146  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:13.884898  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:13.897846  356149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:14.008693  356149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:14.106719  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:14.123351  356149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:14.141529  356149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:14.141589  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.153526  356149 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:14.153582  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.164449  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.173423  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.182016  356149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:14.190302  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.198806  356149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.212456  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.221521  356149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:14.228570  356149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:14.235738  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.317556  356149 ssh_runner.go:195] Run: sudo systemctl restart crio
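
The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A rough Go equivalent of just the pause-image rewrite, using a hypothetical helper name, could be:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites any existing "pause_image = ..." line in the given
// CRI-O drop-in config, mirroring the sed call seen in the log above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
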
	I1221 20:27:14.455679  356149 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:14.455753  356149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:14.459940  356149 start.go:564] Will wait 60s for crictl version
	I1221 20:27:14.459986  356149 ssh_runner.go:195] Run: which crictl
	I1221 20:27:14.463397  356149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:14.489140  356149 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:14.489245  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.517363  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.546070  356149 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:14.547316  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:14.565561  356149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:14.569784  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:14.581403  356149 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:13.608430  349045 pod_ready.go:94] pod "coredns-66bc5c9577-lvwlf" is "Ready"
	I1221 20:27:13.608466  349045 pod_ready.go:86] duration metric: took 34.004349297s for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.611841  349045 pod_ready.go:83] waiting for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.616529  349045 pod_ready.go:94] pod "etcd-embed-certs-413073" is "Ready"
	I1221 20:27:13.616554  349045 pod_ready.go:86] duration metric: took 4.687623ms for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.618652  349045 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.622524  349045 pod_ready.go:94] pod "kube-apiserver-embed-certs-413073" is "Ready"
	I1221 20:27:13.622543  349045 pod_ready.go:86] duration metric: took 3.869908ms for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.624168  349045 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.809151  349045 pod_ready.go:94] pod "kube-controller-manager-embed-certs-413073" is "Ready"
	I1221 20:27:13.809190  349045 pod_ready.go:86] duration metric: took 184.998965ms for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.007416  349045 pod_ready.go:83] waiting for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.407581  349045 pod_ready.go:94] pod "kube-proxy-qvdzm" is "Ready"
	I1221 20:27:14.407613  349045 pod_ready.go:86] duration metric: took 400.166324ms for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.607762  349045 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007654  349045 pod_ready.go:94] pod "kube-scheduler-embed-certs-413073" is "Ready"
	I1221 20:27:15.007680  349045 pod_ready.go:86] duration metric: took 399.898068ms for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007693  349045 pod_ready.go:40] duration metric: took 35.406275565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:15.061539  349045 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:15.063682  349045 out.go:179] * Done! kubectl is now configured to use "embed-certs-413073" cluster and "default" namespace by default
	I1221 20:27:13.600450  355293 addons.go:530] duration metric: took 2.85570077s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:14.089929  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.094849  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:14.094882  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:14.590379  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.595270  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:27:14.596370  355293 api_server.go:141] control plane version: v1.34.3
	I1221 20:27:14.596406  355293 api_server.go:131] duration metric: took 1.007034338s to wait for apiserver health ...
	I1221 20:27:14.596417  355293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:14.600490  355293 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:14.600533  355293 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.600546  355293 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.600559  355293 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.600568  355293 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.600578  355293 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.600589  355293 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.600597  355293 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.600605  355293 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.600612  355293 system_pods.go:74] duration metric: took 4.188527ms to wait for pod list to return data ...
	I1221 20:27:14.600623  355293 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:14.602947  355293 default_sa.go:45] found service account: "default"
	I1221 20:27:14.602965  355293 default_sa.go:55] duration metric: took 2.335405ms for default service account to be created ...
	I1221 20:27:14.602975  355293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:27:14.605791  355293 system_pods.go:86] 8 kube-system pods found
	I1221 20:27:14.605823  355293 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.605839  355293 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.605850  355293 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.605863  355293 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.605874  355293 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.605882  355293 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.605892  355293 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.605900  355293 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.605908  355293 system_pods.go:126] duration metric: took 2.927241ms to wait for k8s-apps to be running ...
	I1221 20:27:14.605918  355293 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:27:14.605963  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:14.620737  355293 system_svc.go:56] duration metric: took 14.812436ms WaitForService to wait for kubelet
	I1221 20:27:14.620764  355293 kubeadm.go:587] duration metric: took 3.876051255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:27:14.620781  355293 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:14.623820  355293 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:14.623845  355293 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:14.623864  355293 node_conditions.go:105] duration metric: took 3.074979ms to run NodePressure ...
	I1221 20:27:14.623875  355293 start.go:242] waiting for startup goroutines ...
	I1221 20:27:14.623883  355293 start.go:247] waiting for cluster config update ...
	I1221 20:27:14.623893  355293 start.go:256] writing updated cluster config ...
	I1221 20:27:14.624149  355293 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:14.627869  355293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:14.631173  355293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:27:16.635807  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:14.582532  356149 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:14.582720  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:14.582775  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.616339  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.616358  356149 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:14.616398  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.642742  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.642760  356149 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:14.642767  356149 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:14.642856  356149 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:14.642923  356149 ssh_runner.go:195] Run: crio config
	I1221 20:27:14.689043  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:14.689070  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:14.689084  356149 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:14.689105  356149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:14.689219  356149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:14.689291  356149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:14.697326  356149 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:14.697381  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:14.705127  356149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:14.717405  356149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:14.731759  356149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 20:27:14.743893  356149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:14.747260  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:14.756571  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.836363  356149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:14.861551  356149 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:14.861572  356149 certs.go:195] generating shared ca certs ...
	I1221 20:27:14.861586  356149 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.861730  356149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:14.861776  356149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:14.861786  356149 certs.go:257] generating profile certs ...
	I1221 20:27:14.861838  356149 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:14.861851  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt with IP's: []
	I1221 20:27:14.969695  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt ...
	I1221 20:27:14.969723  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt: {Name:mk9873aa49abf1e0c21b43fa4eeaac6bd3e5af6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969891  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key ...
	I1221 20:27:14.969903  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key: {Name:mk54cfa5fdd535a853df99958b13c9506ad5bf8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969977  356149 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:14.969991  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1221 20:27:15.023559  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 ...
	I1221 20:27:15.023594  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303: {Name:mkeb8aae65e03e7f80ec0f686fed9ea06cda0c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023783  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 ...
	I1221 20:27:15.023802  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303: {Name:mk3d23054258bc709f78fde53bfd58ad79495c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023909  356149 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt
	I1221 20:27:15.024018  356149 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key
	I1221 20:27:15.024108  356149 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:15.024137  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt with IP's: []
	I1221 20:27:15.238672  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt ...
	I1221 20:27:15.238700  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt: {Name:mk12ceb8fec2627da1e23919a8ad1b2d47c85a1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.238872  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key ...
	I1221 20:27:15.238890  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key: {Name:mk350b0a8872a865f49a834064f6447e0f7240cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.239094  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:15.239147  356149 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:15.239163  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:15.239199  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:15.239246  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:15.239281  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:15.239343  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:15.239918  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:15.257758  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:15.274862  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:15.292146  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:15.309413  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:15.328072  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:15.349778  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:15.369272  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:15.389257  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:15.409819  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:15.429531  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:15.446818  356149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:15.458998  356149 ssh_runner.go:195] Run: openssl version
	I1221 20:27:15.465312  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.472913  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:15.480737  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484301  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484353  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.520431  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:15.528644  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:27:15.536038  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.544064  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:15.551906  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555536  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555579  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.591848  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:15.599139  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
	I1221 20:27:15.606610  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.613779  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:15.620972  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625110  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625149  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.660450  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:15.667624  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:15.674835  356149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:15.678595  356149 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:27:15.678651  356149 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:15.678723  356149 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:15.678765  356149 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:15.708139  356149 cri.go:96] found id: ""
	I1221 20:27:15.708254  356149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:15.717705  356149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:27:15.726595  356149 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:27:15.726664  356149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:27:15.735640  356149 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:27:15.735658  356149 kubeadm.go:158] found existing configuration files:
	
	I1221 20:27:15.735693  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:27:15.743487  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:27:15.743528  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:27:15.750424  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:27:15.757426  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:27:15.757476  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:27:15.764200  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.771497  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:27:15.771543  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.778713  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:27:15.786060  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:27:15.786104  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:27:15.793154  356149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:27:15.895321  356149 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:27:15.954184  356149 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1221 20:27:18.637834  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:21.137485  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 20:26:45 no-preload-328404 crio[571]: time="2025-12-21T20:26:45.998618052Z" level=info msg="Started container" PID=1767 containerID=51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper id=e724546c-ed80-49df-9a20-654712beacd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=250da8813d43ecfce0ead723dbf2a57ad0714de4dfc0ed4d35b89967335e3466
	Dec 21 20:26:46 no-preload-328404 crio[571]: time="2025-12-21T20:26:46.042425346Z" level=info msg="Removing container: 3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227" id=3824e963-9b15-4901-ae90-e6254748dc4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:46 no-preload-328404 crio[571]: time="2025-12-21T20:26:46.055424984Z" level=info msg="Removed container 3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=3824e963-9b15-4901-ae90-e6254748dc4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.069054767Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8d6de08f-1c24-455b-8197-b8b14f3c4744 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.069983511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d1a3bf5f-c2db-448b-8010-cc4ac8c15f52 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.07105782Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b2474010-a7cd-44ab-9959-bf11f1e62008 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.071194691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075287072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075491703Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0ca256ac5746b768ed132c6a8c9e6a183d68b5788d7712830f919b20144bb3ac/merged/etc/passwd: no such file or directory"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075525609Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0ca256ac5746b768ed132c6a8c9e6a183d68b5788d7712830f919b20144bb3ac/merged/etc/group: no such file or directory"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075841569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.111937843Z" level=info msg="Created container c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f: kube-system/storage-provisioner/storage-provisioner" id=b2474010-a7cd-44ab-9959-bf11f1e62008 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.112562491Z" level=info msg="Starting container: c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f" id=05ca49fd-5684-4aa2-bf70-9b2cfc2ef725 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.114436987Z" level=info msg="Started container" PID=1781 containerID=c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f description=kube-system/storage-provisioner/storage-provisioner id=05ca49fd-5684-4aa2-bf70-9b2cfc2ef725 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7c37b55873dbe9cc67a1f2075ae9788058e6e961fdc725f061ede812e459702
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.942779088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3013ff2d-b0f5-442c-8388-e524c3f8eec7 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.95884792Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ec3ede6d-426c-413b-aa85-17828d597b32 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.95993139Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=db628ac5-c0ab-46d6-85b3-ec03a45e805e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.960059956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.010596172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.011262728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.155178823Z" level=info msg="Created container 35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=db628ac5-c0ab-46d6-85b3-ec03a45e805e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.155875688Z" level=info msg="Starting container: 35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943" id=1a92ff72-636f-4314-8128-b75a09bf2222 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.158308746Z" level=info msg="Started container" PID=1817 containerID=35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper id=1a92ff72-636f-4314-8128-b75a09bf2222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=250da8813d43ecfce0ead723dbf2a57ad0714de4dfc0ed4d35b89967335e3466
	Dec 21 20:27:08 no-preload-328404 crio[571]: time="2025-12-21T20:27:08.10381159Z" level=info msg="Removing container: 51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db" id=8f368b88-4b15-4a9f-beb4-70c0c40ab752 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:08 no-preload-328404 crio[571]: time="2025-12-21T20:27:08.216601693Z" level=info msg="Removed container 51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=8f368b88-4b15-4a9f-beb4-70c0c40ab752 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	35aa8d65c0fad       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   250da8813d43e       dashboard-metrics-scraper-867fb5f87b-dlspk   kubernetes-dashboard
	c4a3bf64a4312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   c7c37b55873db       storage-provisioner                          kube-system
	bbbb335edc1a3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   60e40d4d17c83       kubernetes-dashboard-b84665fb8-gndgj         kubernetes-dashboard
	a084a6826d154       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   2eee96b0c663f       coredns-7d764666f9-wkztz                     kube-system
	d9cd4ed4c93bf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   42a3b2be21ff3       busybox                                      default
	fe09ae4da8b24       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           57 seconds ago       Running             kube-proxy                  0                   030032d599aab       kube-proxy-tnpxj                             kube-system
	f04b47e9dcfc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   c7c37b55873db       storage-provisioner                          kube-system
	3595d41486618       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   048e502213a22       kindnet-txb2h                                kube-system
	bcac2e4233e07       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   95f7a4db0edb7       etcd-no-preload-328404                       kube-system
	0046d150fd039       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           About a minute ago   Running             kube-apiserver              0                   d5c7c995ad30d       kube-apiserver-no-preload-328404             kube-system
	98be72f58d134       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           About a minute ago   Running             kube-scheduler              0                   60ec47faed9d2       kube-scheduler-no-preload-328404             kube-system
	d787f2902ce77       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           About a minute ago   Running             kube-controller-manager     0                   ac481f4d12bad       kube-controller-manager-no-preload-328404    kube-system
	
	
	==> coredns [a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49327 - 57943 "HINFO IN 8871514818096014852.412357642826896072. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.085449586s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-328404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-328404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=no-preload-328404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-328404
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-328404
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                1bc220dc-568c-47a3-81e8-8d8a8f6c7b02
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-7d764666f9-wkztz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-328404                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-txb2h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-328404              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-328404     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-tnpxj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-328404              100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-dlspk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-gndgj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-328404 event: Registered Node no-preload-328404 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-328404 event: Registered Node no-preload-328404 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca] <==
	{"level":"info","ts":"2025-12-21T20:26:22.581997Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-21T20:26:22.582383Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-21T20:26:22.582798Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:26:23.064179Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064342Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:26:23.064386Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064882Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064918Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:26:23.064941Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064953Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.065760Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-328404 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:26:23.066022Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:23.066314Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:23.066392Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:23.066708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:23.068205Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:26:23.069356Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:26:23.075116Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-21T20:26:23.075130Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:26:27.836401Z","caller":"traceutil/trace.go:172","msg":"trace[1525521691] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"105.062505ms","start":"2025-12-21T20:26:27.731316Z","end":"2025-12-21T20:26:27.836379Z","steps":["trace[1525521691] 'process raft request'  (duration: 104.827672ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:26:27.854192Z","caller":"traceutil/trace.go:172","msg":"trace[1721586658] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"122.816091ms","start":"2025-12-21T20:26:27.731355Z","end":"2025-12-21T20:26:27.854171Z","steps":["trace[1721586658] 'process raft request'  (duration: 122.732382ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:27:07.298754Z","caller":"traceutil/trace.go:172","msg":"trace[1029984313] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"139.266509ms","start":"2025-12-21T20:27:07.159465Z","end":"2025-12-21T20:27:07.298731Z","steps":["trace[1029984313] 'process raft request'  (duration: 139.138091ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:27:07.450391Z","caller":"traceutil/trace.go:172","msg":"trace[1070794006] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"146.934517ms","start":"2025-12-21T20:27:07.303435Z","end":"2025-12-21T20:27:07.450369Z","steps":["trace[1070794006] 'process raft request'  (duration: 126.441341ms)","trace[1070794006] 'compare'  (duration: 20.385564ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:27:23 up  1:09,  0 user,  load average: 4.51, 3.95, 2.81
	Linux no-preload-328404 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b] <==
	I1221 20:26:25.555053       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:25.555477       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1221 20:26:25.555658       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:25.555686       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:25.555712       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:25.764948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:25.764977       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:25.764990       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:25.765121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:26.065204       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:26.065269       1 metrics.go:72] Registering metrics
	I1221 20:26:26.065355       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:35.765377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:35.765460       1 main.go:301] handling current node
	I1221 20:26:45.768002       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:45.768070       1 main.go:301] handling current node
	I1221 20:26:55.765412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:55.765451       1 main.go:301] handling current node
	I1221 20:27:05.764995       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:27:05.765036       1 main.go:301] handling current node
	I1221 20:27:15.766331       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:27:15.766365       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf] <==
	I1221 20:26:24.205541       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:26:24.205699       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.205749       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 20:26:24.205765       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 20:26:24.206073       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.206106       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:26:24.206408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:26:24.211190       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1221 20:26:24.212786       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:24.218204       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:26:24.258292       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1221 20:26:24.265519       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.265540       1 policy_source.go:248] refreshing policies
	I1221 20:26:24.271417       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:26:24.469592       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:26:24.495078       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:26:24.512197       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:24.519127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:24.524535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:26:24.556881       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.163.136"}
	I1221 20:26:24.566781       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.20.190"}
	I1221 20:26:25.109714       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:26:27.730781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:26:27.840741       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:27.858538       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2] <==
	I1221 20:26:27.337849       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337798       1 range_allocator.go:177] "Sending events to api server"
	I1221 20:26:27.337906       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337916       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1221 20:26:27.337923       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337934       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337851       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.338055       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337923       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:27.338133       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.338880       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339308       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339270       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339248       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339279       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339289       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.340498       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:27.342031       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.343266       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.358094       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.438359       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.438952       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:26:27.438990       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:26:27.440964       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.869653       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c] <==
	I1221 20:26:25.356700       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:26:25.441182       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:25.541898       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:25.541955       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1221 20:26:25.542206       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:26:25.564596       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:25.564669       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:26:25.570852       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:26:25.571638       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:26:25.571672       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:25.574557       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:26:25.574736       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:26:25.574643       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:26:25.575360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:26:25.575385       1 config.go:309] "Starting node config controller"
	I1221 20:26:25.575390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:26:25.575410       1 config.go:200] "Starting service config controller"
	I1221 20:26:25.575422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:26:25.675529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:26:25.675588       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:26:25.675602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:26:25.675615       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6] <==
	I1221 20:26:23.018351       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:26:24.114484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:24.114526       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:24.114537       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:24.114547       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:24.191081       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:26:24.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:24.193860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:24.193904       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:24.194038       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:26:24.195058       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:26:24.295027       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:26:45 no-preload-328404 kubelet[724]: E1221 20:26:45.942614     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:45 no-preload-328404 kubelet[724]: I1221 20:26:45.942661     724 scope.go:122] "RemoveContainer" containerID="3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: I1221 20:26:46.040844     724 scope.go:122] "RemoveContainer" containerID="3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: E1221 20:26:46.041158     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: I1221 20:26:46.041193     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: E1221 20:26:46.041437     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: E1221 20:26:52.623658     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: I1221 20:26:52.623696     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: E1221 20:26:52.623856     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:26:56 no-preload-328404 kubelet[724]: I1221 20:26:56.068589     724 scope.go:122] "RemoveContainer" containerID="f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	Dec 21 20:27:04 no-preload-328404 kubelet[724]: E1221 20:27:04.554516     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wkztz" containerName="coredns"
	Dec 21 20:27:06 no-preload-328404 kubelet[724]: E1221 20:27:06.942283     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:06 no-preload-328404 kubelet[724]: I1221 20:27:06.942319     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: I1221 20:27:08.102442     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: E1221 20:27:08.102637     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: I1221 20:27:08.102667     724 scope.go:122] "RemoveContainer" containerID="35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: E1221 20:27:08.102850     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: E1221 20:27:12.623407     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: I1221 20:27:12.623466     724 scope.go:122] "RemoveContainer" containerID="35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: E1221 20:27:12.624077     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:27:19 no-preload-328404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:19 no-preload-328404 kubelet[724]: I1221 20:27:19.438112     724 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 21 20:27:19 no-preload-328404 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:19 no-preload-328404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:19 no-preload-328404 systemd[1]: kubelet.service: Consumed 1.846s CPU time.
	
	
	==> kubernetes-dashboard [bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011] <==
	2025/12/21 20:26:31 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:31 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:31 Using secret token for csrf signing
	2025/12/21 20:26:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:31 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/21 20:26:31 Generating JWE encryption key
	2025/12/21 20:26:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:31 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:31 Creating in-cluster Sidecar client
	2025/12/21 20:26:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:31 Serving insecurely on HTTP port: 9090
	2025/12/21 20:27:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:31 Starting overwatch
	
	
	==> storage-provisioner [c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f] <==
	I1221 20:26:56.133498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:26:56.133539       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:26:56.135349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:59.589850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:03.849822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:07.451493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:10.505057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.527604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.532497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:13.532719       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:27:13.532904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f!
	I1221 20:27:13.532900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf33a741-9273-4d62-a26d-92d41502a937", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f became leader
	W1221 20:27:13.535428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.541029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:13.633171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f!
	W1221 20:27:15.543997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:15.547901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.551881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.556688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:19.559956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:19.563856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.567399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.572350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:23.574804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:23.643215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8] <==
	I1221 20:26:25.314084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:26:55.318683       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328404 -n no-preload-328404
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328404 -n no-preload-328404: exit status 2 (329.162853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-328404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328404
helpers_test.go:244: (dbg) docker inspect no-preload-328404:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	        "Created": "2025-12-21T20:24:59.700822041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:26:15.483158893Z",
	            "FinishedAt": "2025-12-21T20:26:14.568925545Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/hosts",
	        "LogPath": "/var/lib/docker/containers/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c/15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c-json.log",
	        "Name": "/no-preload-328404",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328404:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-328404",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "15210117610bb1d7e689ccf43b58c413e6c46bf69cdf323150333e4817146a0c",
	                "LowerDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c3c186ce969354898e22c123f1d07ef9ca3cedf18571845d4a263f679c4bebe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328404",
	                "Source": "/var/lib/docker/volumes/no-preload-328404/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328404",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328404",
	                "name.minikube.sigs.k8s.io": "no-preload-328404",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cfcc5c15226dda23af07b5a10b10cf21180f51c42f47b1650e69d3ce1c72b866",
	            "SandboxKey": "/var/run/docker/netns/cfcc5c15226d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-328404": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3825326ac2cef213f4d7f258fd319688605c412ad1609130b5a218375fcefc22",
	                    "EndpointID": "23432b4aad4f93e78932ec14303a64217464897893f46c22c3f8e7739b4b0db7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:ed:d4:29:81:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328404",
	                        "15210117610b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404
E1221 20:27:24.643922   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:24.649251   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:24.659527   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:24.679787   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:24.720081   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404: exit status 2 (318.856255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328404 logs -n 25
E1221 20:27:24.800723   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:24.960913   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:25.281420   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-328404 logs -n 25: (1.101907581s)
E1221 20:27:25.921768   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-149976 sudo crio config                                                                                                                                                                                                                  │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p bridge-149976                                                                                                                                                                                                                                   │ bridge-149976                │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ delete  │ -p disable-driver-mounts-903813                                                                                                                                                                                                                    │ disable-driver-mounts-903813 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:04.161028  356149 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:04.161303  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161311  356149 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:04.161315  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161505  356149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:04.161969  356149 out.go:368] Setting JSON to false
	I1221 20:27:04.163121  356149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4173,"bootTime":1766344651,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:04.163191  356149 start.go:143] virtualization: kvm guest
	I1221 20:27:04.165113  356149 out.go:179] * [newest-cni-734511] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:04.166326  356149 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:04.166322  356149 notify.go:221] Checking for updates...
	I1221 20:27:04.168489  356149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:04.169743  356149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:04.170878  356149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:04.171920  356149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:04.172960  356149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:04.174444  356149 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174550  356149 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174656  356149 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:04.174752  356149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:04.199923  356149 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:04.200099  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.255148  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.245163223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.255317  356149 docker.go:319] overlay module found
	I1221 20:27:04.256944  356149 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:04.258122  356149 start.go:309] selected driver: docker
	I1221 20:27:04.258135  356149 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:04.258146  356149 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:04.258746  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.313188  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.304012682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.313409  356149 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1221 20:27:04.313445  356149 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1221 20:27:04.313719  356149 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:04.315617  356149 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:04.316685  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:04.316752  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:04.316769  356149 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:04.316847  356149 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:04.318044  356149 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:04.319025  356149 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:04.319999  356149 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:04.320951  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.320986  356149 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:04.320999  356149 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:04.321043  356149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:04.321074  356149 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:04.321084  356149 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:04.321164  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:04.321181  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json: {Name:mka6cda6f0218fe0b8ed835e73384be1466cd914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:04.340148  356149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:04.340164  356149 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:04.340186  356149 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:04.340217  356149 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:04.340337  356149 start.go:364] duration metric: took 80.745µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:04.340360  356149 start.go:93] Provisioning new machine with config: &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:04.340419  356149 start.go:125] createHost starting for "" (driver="docker")
	W1221 20:27:00.711936  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:27:03.210810  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:27:04.712597  345910 pod_ready.go:94] pod "coredns-7d764666f9-wkztz" is "Ready"
	I1221 20:27:04.712638  345910 pod_ready.go:86] duration metric: took 39.007284258s for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.715404  345910 pod_ready.go:83] waiting for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.719865  345910 pod_ready.go:94] pod "etcd-no-preload-328404" is "Ready"
	I1221 20:27:04.719886  345910 pod_ready.go:86] duration metric: took 4.454533ms for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.722758  345910 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.726749  345910 pod_ready.go:94] pod "kube-apiserver-no-preload-328404" is "Ready"
	I1221 20:27:04.726768  345910 pod_ready.go:86] duration metric: took 3.987664ms for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.728754  345910 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.909743  345910 pod_ready.go:94] pod "kube-controller-manager-no-preload-328404" is "Ready"
	I1221 20:27:04.909773  345910 pod_ready.go:86] duration metric: took 180.998003ms for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.110503  345910 pod_ready.go:83] waiting for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.509849  345910 pod_ready.go:94] pod "kube-proxy-tnpxj" is "Ready"
	I1221 20:27:05.509877  345910 pod_ready.go:86] duration metric: took 399.350496ms for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.710358  345910 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109831  345910 pod_ready.go:94] pod "kube-scheduler-no-preload-328404" is "Ready"
	I1221 20:27:06.109858  345910 pod_ready.go:86] duration metric: took 399.475178ms for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109870  345910 pod_ready.go:40] duration metric: took 40.408845738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:06.161975  345910 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:06.166771  345910 out.go:179] * Done! kubectl is now configured to use "no-preload-328404" cluster and "default" namespace by default
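
	[Editor's note] The pod_ready.go lines above poll each core kube-system pod until it reports the Ready condition or disappears, recording a duration metric per pod. For readers who want to reproduce that kind of wait outside minikube, here is a minimal client-go sketch; the kubeconfig path, poll interval, and helper name are assumptions for illustration, not minikube's actual pod_ready.go implementation.

	// podready_sketch.go — illustrative only; not minikube's pod_ready.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod is Ready, is gone, or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if errors.IsNotFound(err) {
				return nil // pod is gone, which the log above also treats as success
			}
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		// Assumed kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-7d764666f9-wkztz", 4*time.Minute))
	}
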
	I1221 20:27:01.942630  355293 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-766361" ...
	I1221 20:27:01.942690  355293 cli_runner.go:164] Run: docker start default-k8s-diff-port-766361
	I1221 20:27:02.181766  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:02.200499  355293 kic.go:430] container "default-k8s-diff-port-766361" state is running.
	I1221 20:27:02.200866  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:02.221322  355293 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json ...
	I1221 20:27:02.221536  355293 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:02.221591  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:02.240688  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:02.240957  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:02.240973  355293 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:02.241682  355293 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43750->127.0.0.1:33129: read: connection reset by peer
	I1221 20:27:05.381889  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.381916  355293 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-766361"
	I1221 20:27:05.381967  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.401135  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.401433  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.401460  355293 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766361 && echo "default-k8s-diff-port-766361" | sudo tee /etc/hostname
	I1221 20:27:05.555524  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.555604  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.576000  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.576357  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.576389  355293 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766361/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:05.714615  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:05.714643  355293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:05.714683  355293 ubuntu.go:190] setting up certificates
	I1221 20:27:05.714693  355293 provision.go:84] configureAuth start
	I1221 20:27:05.714749  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:05.733905  355293 provision.go:143] copyHostCerts
	I1221 20:27:05.734008  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:05.734027  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:05.734108  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:05.734253  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:05.734268  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:05.734313  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:05.734473  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:05.734485  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:05.734515  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:05.734605  355293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766361 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-766361 localhost minikube]
	I1221 20:27:05.885586  355293 provision.go:177] copyRemoteCerts
	I1221 20:27:05.885657  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:05.885704  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.903686  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:06.004376  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:06.022329  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1221 20:27:06.039861  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:06.057192  355293 provision.go:87] duration metric: took 342.475794ms to configureAuth
	I1221 20:27:06.057250  355293 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:06.057479  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:06.057615  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:06.077189  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:06.077572  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:06.077607  355293 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1221 20:27:05.109977  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:07.609706  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:04.342608  356149 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1221 20:27:04.342833  356149 start.go:159] libmachine.API.Create for "newest-cni-734511" (driver="docker")
	I1221 20:27:04.342865  356149 client.go:173] LocalClient.Create starting
	I1221 20:27:04.342925  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 20:27:04.342953  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.342973  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343034  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 20:27:04.343056  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.343071  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343576  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 20:27:04.359499  356149 cli_runner.go:211] docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 20:27:04.359553  356149 network_create.go:284] running [docker network inspect newest-cni-734511] to gather additional debugging logs...
	I1221 20:27:04.359572  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511
	W1221 20:27:04.375487  356149 cli_runner.go:211] docker network inspect newest-cni-734511 returned with exit code 1
	I1221 20:27:04.375516  356149 network_create.go:287] error running [docker network inspect newest-cni-734511]: docker network inspect newest-cni-734511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-734511 not found
	I1221 20:27:04.375530  356149 network_create.go:289] output of [docker network inspect newest-cni-734511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-734511 not found
	
	** /stderr **
	I1221 20:27:04.375669  356149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:04.393047  356149 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f29a930c06e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8b:29:89:af:bd} reservation:<nil>}
	I1221 20:27:04.393765  356149 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ef9486b81b4e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:74:fc:8d:d6:e1} reservation:<nil>}
	I1221 20:27:04.394589  356149 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a8eed82beee6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:19:43:42:02:f6} reservation:<nil>}
	I1221 20:27:04.395482  356149 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e58c10}
	I1221 20:27:04.395503  356149 network_create.go:124] attempt to create docker network newest-cni-734511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1221 20:27:04.395573  356149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-734511 newest-cni-734511
	I1221 20:27:04.440797  356149 network_create.go:108] docker network newest-cni-734511 192.168.76.0/24 created
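
	[Editor's note] network.go above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges already hold them, then settles on 192.168.76.0/24. A rough sketch of that "first free /24" idea follows; the step size of 9 is inferred from the addresses in this log, and the isTaken callback is a stand-in for minikube's real bridge/interface scan.

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet steps through 192.168.x.0/24 candidates (49, 58, 67, 76, ...)
	// and returns the first one the caller reports as free. isTaken is a placeholder
	// for a real check against existing docker bridge networks.
	func firstFreeSubnet(isTaken func(*net.IPNet) bool) (*net.IPNet, error) {
		for third := 49; third <= 239; third += 9 {
			_, subnet, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if err != nil {
				return nil, err
			}
			if !isTaken(subnet) {
				return subnet, nil
			}
		}
		return nil, fmt.Errorf("no free private /24 found")
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true}
		s, err := firstFreeSubnet(func(n *net.IPNet) bool { return taken[n.String()] })
		fmt.Println(s, err) // 192.168.76.0/24 <nil>, matching the subnet chosen in the log
	}
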
	I1221 20:27:04.440827  356149 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-734511" container
	I1221 20:27:04.440895  356149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 20:27:04.457596  356149 cli_runner.go:164] Run: docker volume create newest-cni-734511 --label name.minikube.sigs.k8s.io=newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true
	I1221 20:27:04.474472  356149 oci.go:103] Successfully created a docker volume newest-cni-734511
	I1221 20:27:04.474552  356149 cli_runner.go:164] Run: docker run --rm --name newest-cni-734511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --entrypoint /usr/bin/test -v newest-cni-734511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 20:27:04.874657  356149 oci.go:107] Successfully prepared a docker volume newest-cni-734511
	I1221 20:27:04.874806  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.874826  356149 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 20:27:04.874898  356149 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 20:27:08.234181  356149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359233452s)
	I1221 20:27:08.234217  356149 kic.go:203] duration metric: took 3.359386954s to extract preloaded images to volume ...
	W1221 20:27:08.234353  356149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 20:27:08.234414  356149 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 20:27:08.234470  356149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 20:27:08.295476  356149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-734511 --name newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-734511 --network newest-cni-734511 --ip 192.168.76.2 --volume newest-cni-734511:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 20:27:08.565567  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Running}}
	I1221 20:27:08.583983  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.604221  356149 cli_runner.go:164] Run: docker exec newest-cni-734511 stat /var/lib/dpkg/alternatives/iptables
	I1221 20:27:08.654194  356149 oci.go:144] the created container "newest-cni-734511" has a running status.
	I1221 20:27:08.654253  356149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa...
	I1221 20:27:08.704802  356149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 20:27:08.732838  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.751273  356149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 20:27:08.751296  356149 kic_runner.go:114] Args: [docker exec --privileged newest-cni-734511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 20:27:08.793174  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.814689  356149 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:08.814784  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:08.835179  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:08.835685  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:08.835721  356149 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:08.836734  356149 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54514->127.0.0.1:33134: read: connection reset by peer
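
	[Editor's note] As with default-k8s-diff-port-766361 earlier, the first SSH dial to the freshly started container is reset while sshd inside it is still coming up, and provisioning simply retries until the connection succeeds. A generic retry-with-backoff sketch is below; the address, timeouts, and backoff values are illustrative, not minikube's.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until it succeeds or the
	// deadline passes. Fresh containers often reset the first attempts, as above.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		backoff := 500 * time.Millisecond
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33134", 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}
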
	I1221 20:27:08.318032  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:08.318063  355293 machine.go:97] duration metric: took 6.096511406s to provisionDockerMachine
	I1221 20:27:08.318079  355293 start.go:293] postStartSetup for "default-k8s-diff-port-766361" (driver="docker")
	I1221 20:27:08.318096  355293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:08.318170  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:08.318243  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.339519  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.441820  355293 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:08.446242  355293 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:08.446278  355293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:08.446291  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:08.446430  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:08.446568  355293 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:08.446699  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:08.454698  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:08.473177  355293 start.go:296] duration metric: took 155.082818ms for postStartSetup
	I1221 20:27:08.473319  355293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:08.473379  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.492373  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.588791  355293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:08.593998  355293 fix.go:56] duration metric: took 6.67202468s for fixHost
	I1221 20:27:08.594026  355293 start.go:83] releasing machines lock for "default-k8s-diff-port-766361", held for 6.672074779s
	I1221 20:27:08.594093  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:08.614584  355293 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:08.614626  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.614688  355293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:08.614776  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.635066  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.635410  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.798479  355293 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:08.805888  355293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:08.851201  355293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:08.857838  355293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:08.857908  355293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:08.869971  355293 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:08.869994  355293 start.go:496] detecting cgroup driver to use...
	I1221 20:27:08.870021  355293 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:08.870056  355293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:08.886198  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:08.900320  355293 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:08.900392  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:08.916379  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:08.929614  355293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:09.017529  355293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:09.102487  355293 docker.go:234] disabling docker service ...
	I1221 20:27:09.102541  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:09.117923  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:09.130875  355293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:09.210057  355293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:09.290821  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:09.302670  355293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:09.316043  355293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:09.316090  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.324521  355293 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:09.324576  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.332846  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.340926  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.349091  355293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:09.357325  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.366239  355293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.374613  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.383590  355293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:09.390644  355293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:09.397642  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.469485  355293 ssh_runner.go:195] Run: sudo systemctl restart crio
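
	[Editor's note] Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. The section headers and any other keys shipped in the kicbase image are assumptions here; only the values themselves come from the commands in this log.

	# illustrative excerpt of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
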
	I1221 20:27:09.603676  355293 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:09.603754  355293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:09.608196  355293 start.go:564] Will wait 60s for crictl version
	I1221 20:27:09.608299  355293 ssh_runner.go:195] Run: which crictl
	I1221 20:27:09.611955  355293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:09.635202  355293 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:09.635292  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.662582  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.691390  355293 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1221 20:27:09.692632  355293 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-766361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:09.713083  355293 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:09.717679  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.728452  355293 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:09.728580  355293 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:27:09.728646  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.760480  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.760502  355293 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:09.760551  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.786108  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.786130  355293 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:09.786137  355293 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1221 20:27:09.786272  355293 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-766361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:09.786341  355293 ssh_runner.go:195] Run: crio config
	I1221 20:27:09.833071  355293 cni.go:84] Creating CNI manager for ""
	I1221 20:27:09.833099  355293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:09.833112  355293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:27:09.833133  355293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766361 NodeName:default-k8s-diff-port-766361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:09.833275  355293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766361"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:09.833341  355293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:27:09.842261  355293 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:09.842317  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:09.849946  355293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1221 20:27:09.861851  355293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:27:09.873798  355293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
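
	[Editor's note] The rendered kubeadm config (2227 bytes, matching the YAML above) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. Later in the start flow it is consumed by kubeadm init --config; the final path and the extra flags minikube passes are not shown in this excerpt, so the following is only a simplified illustration of that step.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Illustrative invocation only; minikube's real command line and the final
		// config path differ from this sketch.
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}
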
	I1221 20:27:09.886300  355293 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:09.889860  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.899253  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.978391  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:10.002606  355293 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361 for IP: 192.168.103.2
	I1221 20:27:10.002626  355293 certs.go:195] generating shared ca certs ...
	I1221 20:27:10.002644  355293 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.002811  355293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:10.002880  355293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:10.002892  355293 certs.go:257] generating profile certs ...
	I1221 20:27:10.003002  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key
	I1221 20:27:10.003076  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53
	I1221 20:27:10.003131  355293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key
	I1221 20:27:10.003288  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:10.003336  355293 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:10.003359  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:10.003393  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:10.003426  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:10.003465  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:10.003533  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:10.004374  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:10.023130  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:10.042080  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:10.062135  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:10.085174  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1221 20:27:10.106654  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:10.126596  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:10.145813  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:10.163770  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:10.180292  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:10.198557  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:10.214868  355293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:10.226847  355293 ssh_runner.go:195] Run: openssl version
	I1221 20:27:10.233097  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.240743  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:10.248144  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251615  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251669  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.287002  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:10.294132  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.301357  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:10.308313  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311705  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311741  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.346268  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:10.353551  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.360546  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:10.367671  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371287  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371336  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.406685  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
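The openssl x509 -hash -noout runs above compute OpenSSL's subject-name hash for each CA file, and the following test -L calls check that a matching /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) is in place so TLS clients on the node can locate the CAs. A minimal Go sketch of creating such a hash link, assuming direct access to the node filesystem (the test itself only verifies the link and shells out over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes OpenSSL's subject hash for pemPath and installs a
	// /etc/ssl/certs/<hash>.0 symlink pointing at it, mirroring the log above.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // behave like ln -fs: replace a stale link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Illustrative target; the log works on /usr/share/ca-certificates/minikubeCA.pem.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}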
	I1221 20:27:10.413819  355293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:10.417462  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:10.454011  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:10.488179  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:10.533872  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:10.576052  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:10.629693  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
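Each openssl x509 -checkend 86400 run above asks whether the certificate stays valid for at least the next 24 hours (86400 seconds); a non-zero exit would signal that the cert needs to be regenerated before the control plane is restarted. A rough equivalent in Go using crypto/x509 instead of shelling out (the path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path stops being valid
	// within d, i.e. the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}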
	I1221 20:27:10.670862  355293 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:10.670963  355293 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:10.671037  355293 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:10.702259  355293 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:27:10.702282  355293 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:27:10.702285  355293 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:27:10.702295  355293 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:27:10.702298  355293 cri.go:96] found id: ""
	I1221 20:27:10.702339  355293 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:10.714908  355293 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:10Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:10.714989  355293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:10.722893  355293 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:10.722911  355293 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:10.722953  355293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:10.730397  355293 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:10.731501  355293 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-766361" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.732093  355293 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-766361" cluster setting kubeconfig missing "default-k8s-diff-port-766361" context setting]
	I1221 20:27:10.733154  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.734776  355293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:10.742370  355293 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1221 20:27:10.742398  355293 kubeadm.go:602] duration metric: took 19.480686ms to restartPrimaryControlPlane
	I1221 20:27:10.742407  355293 kubeadm.go:403] duration metric: took 71.557752ms to StartCluster
	I1221 20:27:10.742421  355293 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.742483  355293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.744452  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.744686  355293 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:10.744774  355293 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:10.744878  355293 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744895  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:10.744908  355293 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744913  355293 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744941  355293 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-766361"
	I1221 20:27:10.744900  355293 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.744955  355293 addons.go:248] addon dashboard should already be in state true
	W1221 20:27:10.744979  355293 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:10.744986  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.745018  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.744922  355293 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766361"
	I1221 20:27:10.745404  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745485  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745524  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.750065  355293 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:10.751603  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:10.771924  355293 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:27:10.771928  355293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:10.773031  355293 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.773050  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:10.773064  355293 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:10.773110  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.773127  355293 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.773144  355293 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:10.773173  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.773700  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.774627  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:10.774645  355293 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:10.774701  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.807788  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.809438  355293 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.809458  355293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:10.809514  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.812330  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.832737  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.891658  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:10.905174  355293 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:27:10.923657  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:10.923678  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:10.924773  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.938030  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:10.938053  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:10.947339  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.952101  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:10.952123  355293 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:10.966725  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:10.966747  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:10.982019  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:10.982043  355293 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:10.996528  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:10.996558  355293 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:11.009822  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:11.009847  355293 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:11.022602  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:11.022625  355293 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:11.034599  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:11.034621  355293 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:11.046622  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1221 20:27:09.610037  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:12.110288  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:12.977615  355293 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:12.977667  355293 node_ready.go:38] duration metric: took 2.072442361s for node "default-k8s-diff-port-766361" to be "Ready" ...
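node_ready.go above simply polls the node object until its Ready condition reports True, which took about two seconds here against a six-minute ceiling. A simplified client-go sketch of that wait; the kubeconfig path, poll interval and error handling are illustrative and the test's own implementation differs in detail:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the named node reports Ready=True or the timeout elapses.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // roughly the cadence visible in the log above
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		// Illustrative kubeconfig path; the test builds its client from the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22179-9159/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "default-k8s-diff-port-766361", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}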
	I1221 20:27:12.977685  355293 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:12.977831  355293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:13.589060  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664212034s)
	I1221 20:27:13.589105  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.641740556s)
	I1221 20:27:13.589236  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.542570549s)
	I1221 20:27:13.589304  355293 api_server.go:72] duration metric: took 2.844588927s to wait for apiserver process to appear ...
	I1221 20:27:13.589365  355293 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:13.589385  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:13.590939  355293 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-766361 addons enable metrics-server
	
	I1221 20:27:13.594212  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:13.594241  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:13.599341  355293 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:11.977348  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:11.977379  356149 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:11.977454  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:11.999751  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:11.999976  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:11.999994  356149 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:12.157144  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:12.157257  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.179924  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.180242  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.180272  356149 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:12.325486  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:12.325514  356149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:12.325536  356149 ubuntu.go:190] setting up certificates
	I1221 20:27:12.325549  356149 provision.go:84] configureAuth start
	I1221 20:27:12.325622  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:12.346791  356149 provision.go:143] copyHostCerts
	I1221 20:27:12.346858  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:12.346870  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:12.346953  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:12.347063  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:12.347077  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:12.347117  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:12.347205  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:12.347216  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:12.347269  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:12.347357  356149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:12.416614  356149 provision.go:177] copyRemoteCerts
	I1221 20:27:12.416685  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:12.416736  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.438322  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:12.547462  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:12.566972  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:12.584445  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:12.602292  356149 provision.go:87] duration metric: took 276.731864ms to configureAuth
	I1221 20:27:12.602317  356149 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:12.602481  356149 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:12.602570  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.628085  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.628416  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.628446  356149 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:12.963462  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:12.963499  356149 machine.go:97] duration metric: took 4.148788477s to provisionDockerMachine
	I1221 20:27:12.963511  356149 client.go:176] duration metric: took 8.620635665s to LocalClient.Create
	I1221 20:27:12.963527  356149 start.go:167] duration metric: took 8.620693811s to libmachine.API.Create "newest-cni-734511"
	I1221 20:27:12.963536  356149 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:12.963549  356149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:12.963616  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:12.963661  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.994720  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.106837  356149 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:13.112217  356149 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:13.112284  356149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:13.112297  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:13.112360  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:13.112453  356149 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:13.112574  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:13.123914  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:13.152209  356149 start.go:296] duration metric: took 188.649352ms for postStartSetup
	I1221 20:27:13.152586  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.174145  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:13.174476  356149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:13.174533  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.195734  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.296538  356149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:13.301216  356149 start.go:128] duration metric: took 8.960783247s to createHost
	I1221 20:27:13.301259  356149 start.go:83] releasing machines lock for "newest-cni-734511", held for 8.96090932s
	I1221 20:27:13.301374  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.323173  356149 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:13.323205  356149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:13.323244  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.323280  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.346513  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.347201  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.456203  356149 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:13.536683  356149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:13.585062  356149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:13.590455  356149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:13.590524  356149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:13.622114  356149 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
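The find ... -exec mv step above renames any bridge or podman CNI definitions in /etc/cni/net.d to *.mk_disabled so that only the CNI minikube installs (kindnet for this profile) is picked up by CRI-O. A small Go sketch of the same rename, assuming direct access to the directory rather than the SSH round-trip the test uses:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman CNI config files so the container
	// runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled` above.
	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		moved, err := disableBridgeCNIs("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("disabled:", moved)
	}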
	I1221 20:27:13.622139  356149 start.go:496] detecting cgroup driver to use...
	I1221 20:27:13.622174  356149 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:13.622272  356149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:13.639104  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:13.651381  356149 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:13.651453  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:13.667983  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:13.685002  356149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:13.775846  356149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:13.866075  356149 docker.go:234] disabling docker service ...
	I1221 20:27:13.866146  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:13.884898  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:13.897846  356149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:14.008693  356149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:14.106719  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:14.123351  356149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:14.141529  356149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:14.141589  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.153526  356149 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:14.153582  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.164449  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.173423  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.182016  356149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:14.190302  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.198806  356149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.212456  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.221521  356149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:14.228570  356149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:14.235738  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.317556  356149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:27:14.455679  356149 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:14.455753  356149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:14.459940  356149 start.go:564] Will wait 60s for crictl version
	I1221 20:27:14.459986  356149 ssh_runner.go:195] Run: which crictl
	I1221 20:27:14.463397  356149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:14.489140  356149 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:14.489245  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.517363  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.546070  356149 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:14.547316  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:14.565561  356149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:14.569784  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
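The bash one-liner above rewrites /etc/hosts inside the node so host.minikube.internal resolves to the network gateway (192.168.76.1): it drops any existing host.minikube.internal line and appends a fresh mapping. A Go rendering of the same rewrite, again assuming direct file access rather than SSH:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setMinikubeHostEntry drops any existing host.minikube.internal line from
	// hostsPath and appends "<ip>\thost.minikube.internal", like the shell above.
	func setMinikubeHostEntry(hostsPath, ip string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop any stale mapping, like the grep -v above
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\thost.minikube.internal")
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setMinikubeHostEntry("/etc/hosts", "192.168.76.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}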
	I1221 20:27:14.581403  356149 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:13.608430  349045 pod_ready.go:94] pod "coredns-66bc5c9577-lvwlf" is "Ready"
	I1221 20:27:13.608466  349045 pod_ready.go:86] duration metric: took 34.004349297s for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.611841  349045 pod_ready.go:83] waiting for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.616529  349045 pod_ready.go:94] pod "etcd-embed-certs-413073" is "Ready"
	I1221 20:27:13.616554  349045 pod_ready.go:86] duration metric: took 4.687623ms for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.618652  349045 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.622524  349045 pod_ready.go:94] pod "kube-apiserver-embed-certs-413073" is "Ready"
	I1221 20:27:13.622543  349045 pod_ready.go:86] duration metric: took 3.869908ms for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.624168  349045 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.809151  349045 pod_ready.go:94] pod "kube-controller-manager-embed-certs-413073" is "Ready"
	I1221 20:27:13.809190  349045 pod_ready.go:86] duration metric: took 184.998965ms for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.007416  349045 pod_ready.go:83] waiting for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.407581  349045 pod_ready.go:94] pod "kube-proxy-qvdzm" is "Ready"
	I1221 20:27:14.407613  349045 pod_ready.go:86] duration metric: took 400.166324ms for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.607762  349045 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007654  349045 pod_ready.go:94] pod "kube-scheduler-embed-certs-413073" is "Ready"
	I1221 20:27:15.007680  349045 pod_ready.go:86] duration metric: took 399.898068ms for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007693  349045 pod_ready.go:40] duration metric: took 35.406275565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:15.061539  349045 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:15.063682  349045 out.go:179] * Done! kubectl is now configured to use "embed-certs-413073" cluster and "default" namespace by default
	I1221 20:27:13.600450  355293 addons.go:530] duration metric: took 2.85570077s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:14.089929  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.094849  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:14.094882  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:14.590379  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.595270  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:27:14.596370  355293 api_server.go:141] control plane version: v1.34.3
	I1221 20:27:14.596406  355293 api_server.go:131] duration metric: took 1.007034338s to wait for apiserver health ...
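The healthz exchanges above show the usual restart pattern: /healthz answers 500 while post-start hooks such as rbac/bootstrap-roles are still finishing, and the test keeps polling until it gets 200 ("ok"), roughly a second later here. A bare-bones version of that poll in Go; TLS verification is skipped purely to keep the sketch short, and a real check should validate the apiserver's serving certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or
	// the timeout expires.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				_, _ = io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is simply "ok"
				}
				// A 500 listing "[-]poststarthook/... failed" entries is expected
				// while the control plane is still running its post-start hooks.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.103.2:8444/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}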
	I1221 20:27:14.596417  355293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:14.600490  355293 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:14.600533  355293 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.600546  355293 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.600559  355293 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.600568  355293 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.600578  355293 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.600589  355293 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.600597  355293 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.600605  355293 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.600612  355293 system_pods.go:74] duration metric: took 4.188527ms to wait for pod list to return data ...
	I1221 20:27:14.600623  355293 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:14.602947  355293 default_sa.go:45] found service account: "default"
	I1221 20:27:14.602965  355293 default_sa.go:55] duration metric: took 2.335405ms for default service account to be created ...
	I1221 20:27:14.602975  355293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:27:14.605791  355293 system_pods.go:86] 8 kube-system pods found
	I1221 20:27:14.605823  355293 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.605839  355293 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.605850  355293 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.605863  355293 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.605874  355293 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.605882  355293 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.605892  355293 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.605900  355293 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.605908  355293 system_pods.go:126] duration metric: took 2.927241ms to wait for k8s-apps to be running ...
	I1221 20:27:14.605918  355293 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:27:14.605963  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:14.620737  355293 system_svc.go:56] duration metric: took 14.812436ms WaitForService to wait for kubelet
	I1221 20:27:14.620764  355293 kubeadm.go:587] duration metric: took 3.876051255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:27:14.620781  355293 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:14.623820  355293 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:14.623845  355293 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:14.623864  355293 node_conditions.go:105] duration metric: took 3.074979ms to run NodePressure ...
	I1221 20:27:14.623875  355293 start.go:242] waiting for startup goroutines ...
	I1221 20:27:14.623883  355293 start.go:247] waiting for cluster config update ...
	I1221 20:27:14.623893  355293 start.go:256] writing updated cluster config ...
	I1221 20:27:14.624149  355293 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:14.627869  355293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:14.631173  355293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:27:16.635807  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:14.582532  356149 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:14.582720  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:14.582775  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.616339  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.616358  356149 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:14.616398  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.642742  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.642760  356149 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:14.642767  356149 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:14.642856  356149 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:14.642923  356149 ssh_runner.go:195] Run: crio config
	I1221 20:27:14.689043  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:14.689070  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:14.689084  356149 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:14.689105  356149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:14.689219  356149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:14.689291  356149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:14.697326  356149 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:14.697381  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:14.705127  356149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:14.717405  356149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:14.731759  356149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 20:27:14.743893  356149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:14.747260  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:14.756571  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.836363  356149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:14.861551  356149 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:14.861572  356149 certs.go:195] generating shared ca certs ...
	I1221 20:27:14.861586  356149 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.861730  356149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:14.861776  356149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:14.861786  356149 certs.go:257] generating profile certs ...
	I1221 20:27:14.861838  356149 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:14.861851  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt with IP's: []
	I1221 20:27:14.969695  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt ...
	I1221 20:27:14.969723  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt: {Name:mk9873aa49abf1e0c21b43fa4eeaac6bd3e5af6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969891  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key ...
	I1221 20:27:14.969903  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key: {Name:mk54cfa5fdd535a853df99958b13c9506ad5bf8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969977  356149 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:14.969991  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1221 20:27:15.023559  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 ...
	I1221 20:27:15.023594  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303: {Name:mkeb8aae65e03e7f80ec0f686fed9ea06cda0c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023783  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 ...
	I1221 20:27:15.023802  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303: {Name:mk3d23054258bc709f78fde53bfd58ad79495c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023909  356149 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt
	I1221 20:27:15.024018  356149 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key
	I1221 20:27:15.024108  356149 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:15.024137  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt with IP's: []
	I1221 20:27:15.238672  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt ...
	I1221 20:27:15.238700  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt: {Name:mk12ceb8fec2627da1e23919a8ad1b2d47c85a1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.238872  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key ...
	I1221 20:27:15.238890  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key: {Name:mk350b0a8872a865f49a834064f6447e0f7240cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.239094  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:15.239147  356149 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:15.239163  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:15.239199  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:15.239246  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:15.239281  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:15.239343  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:15.239918  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:15.257758  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:15.274862  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:15.292146  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:15.309413  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:15.328072  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:15.349778  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:15.369272  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:15.389257  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:15.409819  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:15.429531  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:15.446818  356149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:15.458998  356149 ssh_runner.go:195] Run: openssl version
	I1221 20:27:15.465312  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.472913  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:15.480737  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484301  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484353  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.520431  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:15.528644  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:27:15.536038  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.544064  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:15.551906  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555536  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555579  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.591848  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:15.599139  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
	I1221 20:27:15.606610  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.613779  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:15.620972  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625110  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625149  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.660450  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:15.667624  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:15.674835  356149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:15.678595  356149 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:27:15.678651  356149 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:15.678723  356149 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:15.678765  356149 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:15.708139  356149 cri.go:96] found id: ""
	I1221 20:27:15.708254  356149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:15.717705  356149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:27:15.726595  356149 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:27:15.726664  356149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:27:15.735640  356149 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:27:15.735658  356149 kubeadm.go:158] found existing configuration files:
	
	I1221 20:27:15.735693  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:27:15.743487  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:27:15.743528  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:27:15.750424  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:27:15.757426  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:27:15.757476  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:27:15.764200  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.771497  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:27:15.771543  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.778713  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:27:15.786060  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:27:15.786104  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:27:15.793154  356149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:27:15.895321  356149 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:27:15.954184  356149 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1221 20:27:18.637834  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:21.137485  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:23.057253  356149 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1221 20:27:23.057342  356149 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 20:27:23.057464  356149 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 20:27:23.057536  356149 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 20:27:23.057581  356149 kubeadm.go:319] OS: Linux
	I1221 20:27:23.057656  356149 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 20:27:23.057734  356149 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 20:27:23.057805  356149 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 20:27:23.057892  356149 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 20:27:23.057979  356149 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 20:27:23.058048  356149 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 20:27:23.058117  356149 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 20:27:23.058158  356149 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 20:27:23.058281  356149 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 20:27:23.058392  356149 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 20:27:23.058543  356149 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 20:27:23.058644  356149 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 20:27:23.069304  356149 out.go:252]   - Generating certificates and keys ...
	I1221 20:27:23.069398  356149 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:27:23.069491  356149 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:27:23.069583  356149 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:27:23.069664  356149 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:27:23.069745  356149 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:27:23.069835  356149 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:27:23.069903  356149 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:27:23.070063  356149 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-734511] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1221 20:27:23.070146  356149 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:27:23.070332  356149 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-734511] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1221 20:27:23.070450  356149 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:27:23.070543  356149 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:27:23.070613  356149 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:27:23.070693  356149 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:27:23.070773  356149 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:27:23.070851  356149 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:27:23.070934  356149 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:27:23.071032  356149 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:27:23.071140  356149 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:27:23.071282  356149 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:27:23.071375  356149 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:27:23.075423  356149 out.go:252]   - Booting up control plane ...
	I1221 20:27:23.075551  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 20:27:23.075648  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 20:27:23.075736  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 20:27:23.075906  356149 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 20:27:23.076043  356149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 20:27:23.076213  356149 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 20:27:23.076369  356149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 20:27:23.076454  356149 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 20:27:23.076645  356149 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 20:27:23.076789  356149 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 20:27:23.076930  356149 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.116041ms
	I1221 20:27:23.077079  356149 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 20:27:23.077215  356149 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1221 20:27:23.077359  356149 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 20:27:23.077495  356149 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 20:27:23.077612  356149 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005819159s
	I1221 20:27:23.077698  356149 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.346286694s
	I1221 20:27:23.077780  356149 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002124897s
	I1221 20:27:23.077914  356149 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 20:27:23.078078  356149 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 20:27:23.078154  356149 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 20:27:23.078439  356149 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-734511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 20:27:23.078512  356149 kubeadm.go:319] [bootstrap-token] Using token: s2l34i.w3afmswk2s1ke4hl
	I1221 20:27:23.099165  356149 out.go:252]   - Configuring RBAC rules ...
	I1221 20:27:23.099408  356149 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 20:27:23.099549  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 20:27:23.099770  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 20:27:23.099948  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 20:27:23.100117  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 20:27:23.100319  356149 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 20:27:23.100533  356149 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 20:27:23.100614  356149 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 20:27:23.100683  356149 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 20:27:23.100690  356149 kubeadm.go:319] 
	I1221 20:27:23.100841  356149 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 20:27:23.100882  356149 kubeadm.go:319] 
	I1221 20:27:23.100987  356149 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 20:27:23.100998  356149 kubeadm.go:319] 
	I1221 20:27:23.101028  356149 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 20:27:23.101109  356149 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 20:27:23.101203  356149 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 20:27:23.101244  356149 kubeadm.go:319] 
	I1221 20:27:23.101321  356149 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 20:27:23.101339  356149 kubeadm.go:319] 
	I1221 20:27:23.101406  356149 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 20:27:23.101412  356149 kubeadm.go:319] 
	I1221 20:27:23.101618  356149 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 20:27:23.101822  356149 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 20:27:23.101924  356149 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 20:27:23.101934  356149 kubeadm.go:319] 
	I1221 20:27:23.102047  356149 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 20:27:23.102190  356149 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 20:27:23.102250  356149 kubeadm.go:319] 
	I1221 20:27:23.102358  356149 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s2l34i.w3afmswk2s1ke4hl \
	I1221 20:27:23.102486  356149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 20:27:23.102515  356149 kubeadm.go:319] 	--control-plane 
	I1221 20:27:23.102527  356149 kubeadm.go:319] 
	I1221 20:27:23.102630  356149 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 20:27:23.102639  356149 kubeadm.go:319] 
	I1221 20:27:23.102762  356149 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s2l34i.w3afmswk2s1ke4hl \
	I1221 20:27:23.102972  356149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 20:27:23.103002  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:23.103014  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:23.178881  356149 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1221 20:27:23.215628  356149 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 20:27:23.221915  356149 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1221 20:27:23.221937  356149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1221 20:27:23.247115  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 20:27:23.751074  356149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 20:27:23.751155  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:23.751177  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-734511 minikube.k8s.io/updated_at=2025_12_21T20_27_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=newest-cni-734511 minikube.k8s.io/primary=true
	I1221 20:27:23.763199  356149 ops.go:34] apiserver oom_adj: -16
	I1221 20:27:23.858174  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 21 20:26:45 no-preload-328404 crio[571]: time="2025-12-21T20:26:45.998618052Z" level=info msg="Started container" PID=1767 containerID=51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper id=e724546c-ed80-49df-9a20-654712beacd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=250da8813d43ecfce0ead723dbf2a57ad0714de4dfc0ed4d35b89967335e3466
	Dec 21 20:26:46 no-preload-328404 crio[571]: time="2025-12-21T20:26:46.042425346Z" level=info msg="Removing container: 3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227" id=3824e963-9b15-4901-ae90-e6254748dc4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:46 no-preload-328404 crio[571]: time="2025-12-21T20:26:46.055424984Z" level=info msg="Removed container 3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=3824e963-9b15-4901-ae90-e6254748dc4b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.069054767Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8d6de08f-1c24-455b-8197-b8b14f3c4744 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.069983511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d1a3bf5f-c2db-448b-8010-cc4ac8c15f52 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.07105782Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b2474010-a7cd-44ab-9959-bf11f1e62008 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.071194691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075287072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075491703Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0ca256ac5746b768ed132c6a8c9e6a183d68b5788d7712830f919b20144bb3ac/merged/etc/passwd: no such file or directory"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075525609Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0ca256ac5746b768ed132c6a8c9e6a183d68b5788d7712830f919b20144bb3ac/merged/etc/group: no such file or directory"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.075841569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.111937843Z" level=info msg="Created container c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f: kube-system/storage-provisioner/storage-provisioner" id=b2474010-a7cd-44ab-9959-bf11f1e62008 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.112562491Z" level=info msg="Starting container: c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f" id=05ca49fd-5684-4aa2-bf70-9b2cfc2ef725 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:26:56 no-preload-328404 crio[571]: time="2025-12-21T20:26:56.114436987Z" level=info msg="Started container" PID=1781 containerID=c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f description=kube-system/storage-provisioner/storage-provisioner id=05ca49fd-5684-4aa2-bf70-9b2cfc2ef725 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7c37b55873dbe9cc67a1f2075ae9788058e6e961fdc725f061ede812e459702
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.942779088Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3013ff2d-b0f5-442c-8388-e524c3f8eec7 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.95884792Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ec3ede6d-426c-413b-aa85-17828d597b32 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.95993139Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=db628ac5-c0ab-46d6-85b3-ec03a45e805e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:06 no-preload-328404 crio[571]: time="2025-12-21T20:27:06.960059956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.010596172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.011262728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.155178823Z" level=info msg="Created container 35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=db628ac5-c0ab-46d6-85b3-ec03a45e805e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.155875688Z" level=info msg="Starting container: 35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943" id=1a92ff72-636f-4314-8128-b75a09bf2222 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:07 no-preload-328404 crio[571]: time="2025-12-21T20:27:07.158308746Z" level=info msg="Started container" PID=1817 containerID=35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper id=1a92ff72-636f-4314-8128-b75a09bf2222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=250da8813d43ecfce0ead723dbf2a57ad0714de4dfc0ed4d35b89967335e3466
	Dec 21 20:27:08 no-preload-328404 crio[571]: time="2025-12-21T20:27:08.10381159Z" level=info msg="Removing container: 51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db" id=8f368b88-4b15-4a9f-beb4-70c0c40ab752 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:08 no-preload-328404 crio[571]: time="2025-12-21T20:27:08.216601693Z" level=info msg="Removed container 51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk/dashboard-metrics-scraper" id=8f368b88-4b15-4a9f-beb4-70c0c40ab752 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	35aa8d65c0fad       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   3                   250da8813d43e       dashboard-metrics-scraper-867fb5f87b-dlspk   kubernetes-dashboard
	c4a3bf64a4312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   c7c37b55873db       storage-provisioner                          kube-system
	bbbb335edc1a3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   53 seconds ago       Running             kubernetes-dashboard        0                   60e40d4d17c83       kubernetes-dashboard-b84665fb8-gndgj         kubernetes-dashboard
	a084a6826d154       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           About a minute ago   Running             coredns                     0                   2eee96b0c663f       coredns-7d764666f9-wkztz                     kube-system
	d9cd4ed4c93bf       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   42a3b2be21ff3       busybox                                      default
	fe09ae4da8b24       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           About a minute ago   Running             kube-proxy                  0                   030032d599aab       kube-proxy-tnpxj                             kube-system
	f04b47e9dcfc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   c7c37b55873db       storage-provisioner                          kube-system
	3595d41486618       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           About a minute ago   Running             kindnet-cni                 0                   048e502213a22       kindnet-txb2h                                kube-system
	bcac2e4233e07       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   95f7a4db0edb7       etcd-no-preload-328404                       kube-system
	0046d150fd039       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           About a minute ago   Running             kube-apiserver              0                   d5c7c995ad30d       kube-apiserver-no-preload-328404             kube-system
	98be72f58d134       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           About a minute ago   Running             kube-scheduler              0                   60ec47faed9d2       kube-scheduler-no-preload-328404             kube-system
	d787f2902ce77       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           About a minute ago   Running             kube-controller-manager     0                   ac481f4d12bad       kube-controller-manager-no-preload-328404    kube-system
	
	
	==> coredns [a084a6826d154a385bde8864d163a3902fe32cf3e04525a973b1d6149ec59316] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49327 - 57943 "HINFO IN 8871514818096014852.412357642826896072. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.085449586s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-328404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-328404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=no-preload-328404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-328404
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:26:54 +0000   Sun, 21 Dec 2025 20:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-328404
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                1bc220dc-568c-47a3-81e8-8d8a8f6c7b02
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-7d764666f9-wkztz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     116s
	  kube-system                 etcd-no-preload-328404                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-txb2h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-328404              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-328404     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-tnpxj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-328404              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-dlspk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-gndgj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  117s  node-controller  Node no-preload-328404 event: Registered Node no-preload-328404 in Controller
	  Normal  RegisteredNode  58s   node-controller  Node no-preload-328404 event: Registered Node no-preload-328404 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [bcac2e4233e078a1060d7687fd886835bcd161ef64c6969c34d2fca692733dca] <==
	{"level":"info","ts":"2025-12-21T20:26:22.581997Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-21T20:26:22.582383Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-21T20:26:22.582798Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:26:23.064179Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064342Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-21T20:26:23.064365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:26:23.064386Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064882Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064918Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:26:23.064941Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.064953Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-21T20:26:23.065760Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-328404 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:26:23.066022Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:23.066314Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:26:23.066392Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:23.066708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:26:23.068205Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:26:23.069356Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:26:23.075116Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-21T20:26:23.075130Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:26:27.836401Z","caller":"traceutil/trace.go:172","msg":"trace[1525521691] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"105.062505ms","start":"2025-12-21T20:26:27.731316Z","end":"2025-12-21T20:26:27.836379Z","steps":["trace[1525521691] 'process raft request'  (duration: 104.827672ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:26:27.854192Z","caller":"traceutil/trace.go:172","msg":"trace[1721586658] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"122.816091ms","start":"2025-12-21T20:26:27.731355Z","end":"2025-12-21T20:26:27.854171Z","steps":["trace[1721586658] 'process raft request'  (duration: 122.732382ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:27:07.298754Z","caller":"traceutil/trace.go:172","msg":"trace[1029984313] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"139.266509ms","start":"2025-12-21T20:27:07.159465Z","end":"2025-12-21T20:27:07.298731Z","steps":["trace[1029984313] 'process raft request'  (duration: 139.138091ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:27:07.450391Z","caller":"traceutil/trace.go:172","msg":"trace[1070794006] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"146.934517ms","start":"2025-12-21T20:27:07.303435Z","end":"2025-12-21T20:27:07.450369Z","steps":["trace[1070794006] 'process raft request'  (duration: 126.441341ms)","trace[1070794006] 'compare'  (duration: 20.385564ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:27:25 up  1:09,  0 user,  load average: 4.51, 3.95, 2.81
	Linux no-preload-328404 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3595d41486618c410928433b6dcd88e3aa2dbd3baaf61cacd454477205ba2b3b] <==
	I1221 20:26:25.555053       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:25.555477       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1221 20:26:25.555658       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:25.555686       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:25.555712       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:25.764948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:25.764977       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:25.764990       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:25.765121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:26.065204       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:26.065269       1 metrics.go:72] Registering metrics
	I1221 20:26:26.065355       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:35.765377       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:35.765460       1 main.go:301] handling current node
	I1221 20:26:45.768002       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:45.768070       1 main.go:301] handling current node
	I1221 20:26:55.765412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:26:55.765451       1 main.go:301] handling current node
	I1221 20:27:05.764995       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:27:05.765036       1 main.go:301] handling current node
	I1221 20:27:15.766331       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1221 20:27:15.766365       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0046d150fd03984c5a267cbb1a42d7e283f30f63ee5bd302b5ebad1dce9150cf] <==
	I1221 20:26:24.205541       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:26:24.205699       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.205749       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 20:26:24.205765       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 20:26:24.206073       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.206106       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:26:24.206408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:26:24.211190       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1221 20:26:24.212786       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:24.218204       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:26:24.258292       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1221 20:26:24.265519       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:24.265540       1 policy_source.go:248] refreshing policies
	I1221 20:26:24.271417       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:26:24.469592       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:26:24.495078       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:26:24.512197       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:24.519127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:24.524535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:26:24.556881       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.163.136"}
	I1221 20:26:24.566781       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.20.190"}
	I1221 20:26:25.109714       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:26:27.730781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:26:27.840741       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:27.858538       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d787f2902ce772055519660b7118e43b95c26d99a74f299380f021e62851e5d2] <==
	I1221 20:26:27.337849       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337798       1 range_allocator.go:177] "Sending events to api server"
	I1221 20:26:27.337906       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337916       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1221 20:26:27.337923       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337934       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337851       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.338055       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.337923       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:27.338133       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.338880       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339308       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339270       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339248       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339279       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.339289       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.340498       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:27.342031       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.343266       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.358094       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.438359       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.438952       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:26:27.438990       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:26:27.440964       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:27.869653       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [fe09ae4da8b24cd8e37c5e7ad994eef35649b944e8c085a4bbe2da7544aa431c] <==
	I1221 20:26:25.356700       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:26:25.441182       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:25.541898       1 shared_informer.go:377] "Caches are synced"
	I1221 20:26:25.541955       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1221 20:26:25.542206       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:26:25.564596       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:25.564669       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:26:25.570852       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:26:25.571638       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:26:25.571672       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:25.574557       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:26:25.574736       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:26:25.574643       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:26:25.575360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:26:25.575385       1 config.go:309] "Starting node config controller"
	I1221 20:26:25.575390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:26:25.575410       1 config.go:200] "Starting service config controller"
	I1221 20:26:25.575422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:26:25.675529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:26:25.675588       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:26:25.675602       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:26:25.675615       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [98be72f58d13404328401992ab2e7394515b18e5e27627b5c20db8e2982872e6] <==
	I1221 20:26:23.018351       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:26:24.114484       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:24.114526       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:24.114537       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:24.114547       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:24.191081       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:26:24.191117       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:24.193860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:24.193904       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:26:24.194038       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:26:24.195058       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:26:24.295027       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:26:45 no-preload-328404 kubelet[724]: E1221 20:26:45.942614     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:45 no-preload-328404 kubelet[724]: I1221 20:26:45.942661     724 scope.go:122] "RemoveContainer" containerID="3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: I1221 20:26:46.040844     724 scope.go:122] "RemoveContainer" containerID="3d6f87597530b468ee2a243966e75fa9b5aabaa7b349ef05d78b3667fd9d1227"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: E1221 20:26:46.041158     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: I1221 20:26:46.041193     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:26:46 no-preload-328404 kubelet[724]: E1221 20:26:46.041437     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: E1221 20:26:52.623658     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: I1221 20:26:52.623696     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:26:52 no-preload-328404 kubelet[724]: E1221 20:26:52.623856     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:26:56 no-preload-328404 kubelet[724]: I1221 20:26:56.068589     724 scope.go:122] "RemoveContainer" containerID="f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8"
	Dec 21 20:27:04 no-preload-328404 kubelet[724]: E1221 20:27:04.554516     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wkztz" containerName="coredns"
	Dec 21 20:27:06 no-preload-328404 kubelet[724]: E1221 20:27:06.942283     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:06 no-preload-328404 kubelet[724]: I1221 20:27:06.942319     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: I1221 20:27:08.102442     724 scope.go:122] "RemoveContainer" containerID="51752adebcca73a1ad50954f812b25abaf14f275a05913f961f4685c85e826db"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: E1221 20:27:08.102637     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: I1221 20:27:08.102667     724 scope.go:122] "RemoveContainer" containerID="35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	Dec 21 20:27:08 no-preload-328404 kubelet[724]: E1221 20:27:08.102850     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: E1221 20:27:12.623407     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" containerName="dashboard-metrics-scraper"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: I1221 20:27:12.623466     724 scope.go:122] "RemoveContainer" containerID="35aa8d65c0fadce5bec49da66a4f22754ddc74c6fd6da3b86311d2e6c0b7d943"
	Dec 21 20:27:12 no-preload-328404 kubelet[724]: E1221 20:27:12.624077     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-dlspk_kubernetes-dashboard(97806fe0-950d-4487-9d9c-d523eea98e5a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-dlspk" podUID="97806fe0-950d-4487-9d9c-d523eea98e5a"
	Dec 21 20:27:19 no-preload-328404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:19 no-preload-328404 kubelet[724]: I1221 20:27:19.438112     724 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 21 20:27:19 no-preload-328404 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:19 no-preload-328404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:19 no-preload-328404 systemd[1]: kubelet.service: Consumed 1.846s CPU time.
	
	
	==> kubernetes-dashboard [bbbb335edc1a37bba1da0a6728be1871809e0281aea068022ebe44b162ab9011] <==
	2025/12/21 20:26:31 Starting overwatch
	2025/12/21 20:26:31 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:31 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:31 Using secret token for csrf signing
	2025/12/21 20:26:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:31 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/21 20:26:31 Generating JWE encryption key
	2025/12/21 20:26:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:31 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:31 Creating in-cluster Sidecar client
	2025/12/21 20:26:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:31 Serving insecurely on HTTP port: 9090
	2025/12/21 20:27:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c4a3bf64a43120217b40dd24afcb1af936c1f147b792cee0b45d9b17fa5b207f] <==
	W1221 20:26:56.135349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:26:59.589850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:03.849822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:07.451493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:10.505057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.527604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.532497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:13.532719       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:27:13.532904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f!
	I1221 20:27:13.532900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf33a741-9273-4d62-a26d-92d41502a937", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f became leader
	W1221 20:27:13.535428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.541029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:13.633171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-328404_cb32c0ce-62c8-47c8-b0d3-fabaa2857f9f!
	W1221 20:27:15.543997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:15.547901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.551881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.556688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:19.559956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:19.563856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.567399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.572350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:23.574804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:23.643215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:25.646047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:25.649919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f04b47e9dcfc58a2156f303c8a4990ce5245587dc05ac87618bd8526092ed3d8] <==
	I1221 20:26:25.314084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:26:55.318683       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328404 -n no-preload-328404
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328404 -n no-preload-328404: exit status 2 (323.136113ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-328404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-413073 --alsologtostderr -v=1
E1221 20:27:27.202373   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-413073 --alsologtostderr -v=1: exit status 80 (2.571927721s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-413073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:27:26.788239  362420 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:26.788461  362420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:26.788469  362420 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:26.788473  362420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:26.788707  362420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:26.788985  362420 out.go:368] Setting JSON to false
	I1221 20:27:26.789010  362420 mustload.go:66] Loading cluster: embed-certs-413073
	I1221 20:27:26.789527  362420 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:26.790069  362420 cli_runner.go:164] Run: docker container inspect embed-certs-413073 --format={{.State.Status}}
	I1221 20:27:26.808833  362420 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:27:26.809178  362420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:26.887152  362420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-21 20:27:26.87585004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:26.887807  362420 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-413073 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotifi
cation:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:27:26.889554  362420 out.go:179] * Pausing node embed-certs-413073 ... 
	I1221 20:27:26.890621  362420 host.go:66] Checking if "embed-certs-413073" exists ...
	I1221 20:27:26.890933  362420 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:26.890987  362420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-413073
	I1221 20:27:26.911832  362420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/embed-certs-413073/id_rsa Username:docker}
	I1221 20:27:27.011863  362420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:27.023444  362420 pause.go:52] kubelet running: true
	I1221 20:27:27.023499  362420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:27.184913  362420 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:27.185028  362420 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:27.249713  362420 cri.go:96] found id: "8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11"
	I1221 20:27:27.249736  362420 cri.go:96] found id: "f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361"
	I1221 20:27:27.249740  362420 cri.go:96] found id: "53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222"
	I1221 20:27:27.249743  362420 cri.go:96] found id: "61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	I1221 20:27:27.249746  362420 cri.go:96] found id: "adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262"
	I1221 20:27:27.249749  362420 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:27:27.249753  362420 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:27:27.249755  362420 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:27:27.249758  362420 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:27:27.249765  362420 cri.go:96] found id: "2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	I1221 20:27:27.249770  362420 cri.go:96] found id: "ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c"
	I1221 20:27:27.249773  362420 cri.go:96] found id: ""
	I1221 20:27:27.249817  362420 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:27.261645  362420 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:27Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:27.441089  362420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:27.457063  362420 pause.go:52] kubelet running: false
	I1221 20:27:27.457122  362420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:27.670724  362420 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:27.670905  362420 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:27.755680  362420 cri.go:96] found id: "8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11"
	I1221 20:27:27.755707  362420 cri.go:96] found id: "f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361"
	I1221 20:27:27.755712  362420 cri.go:96] found id: "53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222"
	I1221 20:27:27.755718  362420 cri.go:96] found id: "61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	I1221 20:27:27.755723  362420 cri.go:96] found id: "adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262"
	I1221 20:27:27.755728  362420 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:27:27.755733  362420 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:27:27.755737  362420 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:27:27.755742  362420 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:27:27.755759  362420 cri.go:96] found id: "2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	I1221 20:27:27.755763  362420 cri.go:96] found id: "ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c"
	I1221 20:27:27.755767  362420 cri.go:96] found id: ""
	I1221 20:27:27.755811  362420 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:28.312119  362420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:28.327119  362420 pause.go:52] kubelet running: false
	I1221 20:27:28.327174  362420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:28.501006  362420 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:28.501087  362420 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:28.574359  362420 cri.go:96] found id: "8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11"
	I1221 20:27:28.574385  362420 cri.go:96] found id: "f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361"
	I1221 20:27:28.574390  362420 cri.go:96] found id: "53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222"
	I1221 20:27:28.574395  362420 cri.go:96] found id: "61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	I1221 20:27:28.574399  362420 cri.go:96] found id: "adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262"
	I1221 20:27:28.574405  362420 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:27:28.574409  362420 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:27:28.574414  362420 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:27:28.574419  362420 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:27:28.574427  362420 cri.go:96] found id: "2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	I1221 20:27:28.574431  362420 cri.go:96] found id: "ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c"
	I1221 20:27:28.574436  362420 cri.go:96] found id: ""
	I1221 20:27:28.574489  362420 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:28.935443  362420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:28.963945  362420 pause.go:52] kubelet running: false
	I1221 20:27:28.963998  362420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:29.168532  362420 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:29.168614  362420 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:29.262518  362420 cri.go:96] found id: "8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11"
	I1221 20:27:29.262544  362420 cri.go:96] found id: "f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361"
	I1221 20:27:29.262550  362420 cri.go:96] found id: "53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222"
	I1221 20:27:29.262555  362420 cri.go:96] found id: "61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	I1221 20:27:29.262559  362420 cri.go:96] found id: "adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262"
	I1221 20:27:29.262563  362420 cri.go:96] found id: "020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce"
	I1221 20:27:29.262568  362420 cri.go:96] found id: "9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676"
	I1221 20:27:29.262581  362420 cri.go:96] found id: "c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536"
	I1221 20:27:29.262590  362420 cri.go:96] found id: "d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f"
	I1221 20:27:29.262600  362420 cri.go:96] found id: "2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	I1221 20:27:29.262605  362420 cri.go:96] found id: "ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c"
	I1221 20:27:29.262610  362420 cri.go:96] found id: ""
	I1221 20:27:29.262655  362420 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:29.283193  362420 out.go:203] 
	W1221 20:27:29.284392  362420 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:27:29.284421  362420 out.go:285] * 
	* 
	W1221 20:27:29.291495  362420 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:27:29.292606  362420 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-413073 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-413073
helpers_test.go:244: (dbg) docker inspect embed-certs-413073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	        "Created": "2025-12-21T20:25:22.363216828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:26:28.534664362Z",
	            "FinishedAt": "2025-12-21T20:26:27.2024561Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hostname",
	        "HostsPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hosts",
	        "LogPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9-json.log",
	        "Name": "/embed-certs-413073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-413073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-413073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	                "LowerDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-413073",
	                "Source": "/var/lib/docker/volumes/embed-certs-413073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-413073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-413073",
	                "name.minikube.sigs.k8s.io": "embed-certs-413073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "256b43767c51ae50db980ec129cbbf894f7d356bf074cf7f2f9f805c4c345b78",
	            "SandboxKey": "/var/run/docker/netns/256b43767c51",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-413073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4158e54948a98ff7a88de94749c8958f71f898a500c109dd7a967015c32451c6",
	                    "EndpointID": "a53944a1e4562e5b2b8c26efa52fbca610e8402cc2df3d5e5478ed91683f7a43",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ea:d8:33:33:81:70",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-413073",
	                        "885ba42913bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
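For reference, the host-port mappings captured in the NetworkSettings.Ports block of the inspect output above can be read back individually with docker's Go-template formatter. The command below is an illustrative sketch (container name and port taken from the inspect output shown here), not part of the test run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-413073
	# prints 33127 for the state captured above; substitute 22/tcp, 2376/tcp, 5000/tcp or 32443/tcp for the other mappings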
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073: exit status 2 (410.6579ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
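As the "(may be ok)" note indicates, minikube status encodes component state in its exit code, so a non-zero exit here reflects the partially paused cluster rather than a command failure. A broader view of the same state can be printed by asking the status template for more fields; the command below is a sketch that assumes the standard Host/Kubelet/APIServer fields of the status output:

	out/minikube-linux-amd64 status -p embed-certs-413073 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'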
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25: (1.360284484s)
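The helper only collects the last 25 lines of each log section. When triaging locally it can be easier to dump the full log to a file, or to filter for lines minikube itself flags as problems; the flag names below are assumed from current minikube builds (verify with logs --help):

	out/minikube-linux-amd64 -p embed-certs-413073 logs --file=/tmp/embed-certs-413073.log
	out/minikube-linux-amd64 -p embed-certs-413073 logs --problems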
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834   │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:29.814144  363941 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:29.814484  363941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:29.814498  363941 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:29.814505  363941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:29.814802  363941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:29.815398  363941 out.go:368] Setting JSON to false
	I1221 20:27:29.817402  363941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4199,"bootTime":1766344651,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:29.817469  363941 start.go:143] virtualization: kvm guest
	I1221 20:27:29.819813  363941 out.go:179] * [test-preload-dl-gcs-162834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:29.821270  363941 notify.go:221] Checking for updates...
	I1221 20:27:29.821346  363941 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:29.822764  363941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:29.823992  363941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:29.825133  363941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:29.826282  363941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:29.828386  363941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Dec 21 20:26:55 embed-certs-413073 crio[574]: time="2025-12-21T20:26:55.17901749Z" level=info msg="Started container" PID=1786 containerID=62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper id=a3ca6161-8ec8-4cb5-b225-5baf30f71fa7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4f9e03592d32116cf290a8b9c9de62dc0f34461fd7d6e618376349ac586028
	Dec 21 20:26:56 embed-certs-413073 crio[574]: time="2025-12-21T20:26:56.057965228Z" level=info msg="Removing container: 00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30" id=417115b4-122c-45a6-8b68-d5a2d679d7a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:56 embed-certs-413073 crio[574]: time="2025-12-21T20:26:56.067220759Z" level=info msg="Removed container 00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=417115b4-122c-45a6-8b68-d5a2d679d7a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.093343935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5dc018e8-4377-405d-ba44-ef5ca4039e92 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.094206588Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61b0c176-1448-4d56-8c31-3ae956639d4e name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.095188993Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=38663f25-0dde-4586-8170-69b7e4b0ca74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.095393308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099583053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099787066Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/44de77a3d9d8bf65f6a4d31a9f153c598292265f9d19716d07360106152f1861/merged/etc/passwd: no such file or directory"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099819363Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/44de77a3d9d8bf65f6a4d31a9f153c598292265f9d19716d07360106152f1861/merged/etc/group: no such file or directory"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.100069709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.124201157Z" level=info msg="Created container 8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11: kube-system/storage-provisioner/storage-provisioner" id=38663f25-0dde-4586-8170-69b7e4b0ca74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.124820877Z" level=info msg="Starting container: 8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11" id=f33ca9ed-9270-4dab-a485-f1c31a46f344 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.126783328Z" level=info msg="Started container" PID=1800 containerID=8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11 description=kube-system/storage-provisioner/storage-provisioner id=f33ca9ed-9270-4dab-a485-f1c31a46f344 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bb098dfff0e0ff39f45656ca49be303b53410e66e2b10b03d4e60eadaa76e79
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.985193092Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f675276f-edd7-4438-b95e-2b5cbe7208c3 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.986551232Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d177745-9460-4f28-bfe9-529231cb0dc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.98783741Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=9e986690-d26b-4fb5-9665-19235de6d606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.987978451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.996161044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.996886943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.030040476Z" level=info msg="Created container 2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=9e986690-d26b-4fb5-9665-19235de6d606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.030736278Z" level=info msg="Starting container: 2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8" id=24a6373d-ceb8-470e-a48a-c5ecf9e3c819 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.032791764Z" level=info msg="Started container" PID=1836 containerID=2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper id=24a6373d-ceb8-470e-a48a-c5ecf9e3c819 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4f9e03592d32116cf290a8b9c9de62dc0f34461fd7d6e618376349ac586028
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.114727558Z" level=info msg="Removing container: 62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6" id=07623220-2551-4e48-8620-11fea9a98b31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.124757293Z" level=info msg="Removed container 62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=07623220-2551-4e48-8620-11fea9a98b31 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2c58a04d839d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   3                   3e4f9e03592d3       dashboard-metrics-scraper-6ffb444bf9-bh865   kubernetes-dashboard
	8d28d1177e2b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   2bb098dfff0e0       storage-provisioner                          kube-system
	ae6a90080b1cc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   79164e5fe5402       kubernetes-dashboard-855c9754f9-mxshr        kubernetes-dashboard
	37580f5da34a0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   e59411b887ec2       busybox                                      default
	f45b5864907aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   df0b53d6fbded       coredns-66bc5c9577-lvwlf                     kube-system
	53c5617d6f7e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   22e3cd34cb740       kindnet-qnfsx                                kube-system
	61b826608670a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   2bb098dfff0e0       storage-provisioner                          kube-system
	adec13e6a9730       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           51 seconds ago      Running             kube-proxy                  0                   d145947283747       kube-proxy-qvdzm                             kube-system
	020459e2a9f09       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   6aae4e5824a49       etcd-embed-certs-413073                      kube-system
	9830572fe0b45       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           54 seconds ago      Running             kube-apiserver              0                   87bf7a9ac8b1c       kube-apiserver-embed-certs-413073            kube-system
	c22f69d01095f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           54 seconds ago      Running             kube-scheduler              0                   a987d165e918c       kube-scheduler-embed-certs-413073            kube-system
	d06de390e7ce1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           54 seconds ago      Running             kube-controller-manager     0                   d3aed11fb704b       kube-controller-manager-embed-certs-413073   kube-system
	
	
	==> coredns [f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60498 - 59054 "HINFO IN 8195386547535733228.7615412157056173379. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089319072s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-413073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-413073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-413073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-413073
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-413073
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                c4a4cac5-f7ed-43b3-8fd7-2b463810496e
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-lvwlf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-413073                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-qnfsx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-413073             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-413073    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-qvdzm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-413073             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bh865    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mxshr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x8 over 116s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 116s)  kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 116s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-413073 event: Registered Node embed-certs-413073 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-413073 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)    kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-413073 event: Registered Node embed-certs-413073 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce] <==
	{"level":"warn","ts":"2025-12-21T20:26:37.424025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.432493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.440742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.446867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.453371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.459729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.466033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.474551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.482647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.490338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.505791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.513285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.521148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.527738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.534207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.541613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.547904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.554514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.562116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.569810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.576146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.592414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.600789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.607025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.662316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:27:30 up  1:09,  0 user,  load average: 4.71, 4.00, 2.84
	Linux embed-certs-413073 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222] <==
	I1221 20:26:39.564281       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:39.594355       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1221 20:26:39.594531       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:39.594558       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:39.594576       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:39.795624       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:39.795683       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:39.795696       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:39.795845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:40.195801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:40.195915       1 metrics.go:72] Registering metrics
	I1221 20:26:40.196022       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:49.796246       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:26:49.796315       1 main.go:301] handling current node
	I1221 20:26:59.797316       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:26:59.797371       1 main.go:301] handling current node
	I1221 20:27:09.796249       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:09.796296       1 main.go:301] handling current node
	I1221 20:27:19.796733       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:19.796766       1 main.go:301] handling current node
	I1221 20:27:29.799302       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:29.799349       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676] <==
	I1221 20:26:38.121789       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 20:26:38.121881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 20:26:38.121848       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:26:38.121850       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:26:38.122353       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:26:38.123727       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1221 20:26:38.123797       1 aggregator.go:171] initial CRD sync complete...
	I1221 20:26:38.123808       1 autoregister_controller.go:144] Starting autoregister controller
	I1221 20:26:38.123814       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:26:38.123820       1 cache.go:39] Caches are synced for autoregister controller
	E1221 20:26:38.128357       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:38.129128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:26:38.141535       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:26:38.144714       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:26:38.371395       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:26:38.395324       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:26:38.410012       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:38.416891       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:38.422117       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:26:38.453650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.243.252"}
	I1221 20:26:38.461687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.47.190"}
	I1221 20:26:39.024174       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:26:41.649828       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:26:41.899176       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:41.949420       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f] <==
	I1221 20:26:41.446355       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 20:26:41.446418       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 20:26:41.446453       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:26:41.446639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:26:41.446654       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 20:26:41.446661       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 20:26:41.446868       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 20:26:41.447091       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1221 20:26:41.447103       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1221 20:26:41.447275       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 20:26:41.448407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 20:26:41.448593       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1221 20:26:41.449756       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1221 20:26:41.451596       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:26:41.452746       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:26:41.452797       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:26:41.452830       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:26:41.452836       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:26:41.452842       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:26:41.453904       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:26:41.455103       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1221 20:26:41.457440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:26:41.457459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1221 20:26:41.469626       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1221 20:26:41.478143       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262] <==
	I1221 20:26:39.374557       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:26:39.441857       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:26:39.542570       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:26:39.542604       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1221 20:26:39.542691       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:26:39.562196       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:39.562266       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:26:39.567488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:26:39.567835       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:26:39.567876       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:39.569411       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:26:39.569501       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:26:39.569474       1 config.go:200] "Starting service config controller"
	I1221 20:26:39.569878       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:26:39.569849       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:26:39.570185       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:26:39.570365       1 config.go:309] "Starting node config controller"
	I1221 20:26:39.571115       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:26:39.571181       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:26:39.669969       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:26:39.669978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:26:39.671116       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536] <==
	I1221 20:26:37.158576       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:26:38.036500       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:38.036565       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:38.036578       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:38.036588       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:38.061239       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 20:26:38.062078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:38.065622       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:38.065706       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:38.066706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:26:38.066794       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:26:38.166283       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:26:43 embed-certs-413073 kubelet[735]: I1221 20:26:43.525638     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 21 20:26:45 embed-certs-413073 kubelet[735]: I1221 20:26:45.025182     735 scope.go:117] "RemoveContainer" containerID="bf0d3fe340164b2b50c3e5dafa344e41e97a297126f6d1996a29a9ed1219d832"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: I1221 20:26:46.030624     735 scope.go:117] "RemoveContainer" containerID="bf0d3fe340164b2b50c3e5dafa344e41e97a297126f6d1996a29a9ed1219d832"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: I1221 20:26:46.030806     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: E1221 20:26:46.031008     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:26:47 embed-certs-413073 kubelet[735]: I1221 20:26:47.035352     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:47 embed-certs-413073 kubelet[735]: E1221 20:26:47.035580     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:26:48 embed-certs-413073 kubelet[735]: I1221 20:26:48.050623     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mxshr" podStartSLOduration=0.412522775 podStartE2EDuration="6.050598586s" podCreationTimestamp="2025-12-21 20:26:42 +0000 UTC" firstStartedPulling="2025-12-21 20:26:42.34629344 +0000 UTC m=+6.448093402" lastFinishedPulling="2025-12-21 20:26:47.984369253 +0000 UTC m=+12.086169213" observedRunningTime="2025-12-21 20:26:48.050274887 +0000 UTC m=+12.152074876" watchObservedRunningTime="2025-12-21 20:26:48.050598586 +0000 UTC m=+12.152398555"
	Dec 21 20:26:55 embed-certs-413073 kubelet[735]: I1221 20:26:55.136657     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: I1221 20:26:56.056726     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: I1221 20:26:56.056925     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: E1221 20:26:56.057123     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:05 embed-certs-413073 kubelet[735]: I1221 20:27:05.136972     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:05 embed-certs-413073 kubelet[735]: E1221 20:27:05.137214     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:10 embed-certs-413073 kubelet[735]: I1221 20:27:10.092952     735 scope.go:117] "RemoveContainer" containerID="61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	Dec 21 20:27:16 embed-certs-413073 kubelet[735]: I1221 20:27:16.984597     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: I1221 20:27:17.113421     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: I1221 20:27:17.113626     735 scope.go:117] "RemoveContainer" containerID="2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: E1221 20:27:17.113860     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:25 embed-certs-413073 kubelet[735]: I1221 20:27:25.136316     735 scope.go:117] "RemoveContainer" containerID="2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	Dec 21 20:27:25 embed-certs-413073 kubelet[735]: E1221 20:27:25.136593     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: kubelet.service: Consumed 1.612s CPU time.
	
	
	==> kubernetes-dashboard [ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c] <==
	2025/12/21 20:26:48 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:48 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:48 Using secret token for csrf signing
	2025/12/21 20:26:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:48 Successful initial request to the apiserver, version: v1.34.3
	2025/12/21 20:26:48 Generating JWE encryption key
	2025/12/21 20:26:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:48 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:48 Creating in-cluster Sidecar client
	2025/12/21 20:26:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:48 Serving insecurely on HTTP port: 9090
	2025/12/21 20:26:48 Starting overwatch
	2025/12/21 20:27:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81] <==
	I1221 20:26:39.348901       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:27:09.351590       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11] <==
	I1221 20:27:10.139389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:27:10.146986       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:27:10.147032       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:27:10.149475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.604619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.865489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.465172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:24.519759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.542706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.550136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:27.550389       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:27:27.550519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce2740a9-39c8-4989-95c5-9081eeb21fd3", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10 became leader
	I1221 20:27:27.550588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10!
	W1221 20:27:27.554987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.562163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:27.651569       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10!
	W1221 20:27:29.567927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:29.572695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413073 -n embed-certs-413073: exit status 2 (356.7052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-413073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-413073
helpers_test.go:244: (dbg) docker inspect embed-certs-413073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	        "Created": "2025-12-21T20:25:22.363216828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:26:28.534664362Z",
	            "FinishedAt": "2025-12-21T20:26:27.2024561Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hostname",
	        "HostsPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/hosts",
	        "LogPath": "/var/lib/docker/containers/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9/885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9-json.log",
	        "Name": "/embed-certs-413073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-413073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-413073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "885ba42913bf831e18d3f9dad92ea5dc1afdd4d51dcbf0038a3664ec5ab7fef9",
	                "LowerDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e86fabc16871f7bf68829e38d956c6f1b781ff7ab7f37ffa80a8f845d563fe9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-413073",
	                "Source": "/var/lib/docker/volumes/embed-certs-413073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-413073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-413073",
	                "name.minikube.sigs.k8s.io": "embed-certs-413073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "256b43767c51ae50db980ec129cbbf894f7d356bf074cf7f2f9f805c4c345b78",
	            "SandboxKey": "/var/run/docker/netns/256b43767c51",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-413073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4158e54948a98ff7a88de94749c8958f71f898a500c109dd7a967015c32451c6",
	                    "EndpointID": "a53944a1e4562e5b2b8c26efa52fbca610e8402cc2df3d5e5478ed91683f7a43",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ea:d8:33:33:81:70",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-413073",
	                        "885ba42913bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073: exit status 2 (379.565963ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-413073 logs -n 25: (1.117688811s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834   │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ stop    │ -p newest-cni-734511 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:29.814144  363941 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:29.814484  363941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:29.814498  363941 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:29.814505  363941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:29.814802  363941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:29.815398  363941 out.go:368] Setting JSON to false
	I1221 20:27:29.817402  363941 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4199,"bootTime":1766344651,"procs":378,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:29.817469  363941 start.go:143] virtualization: kvm guest
	I1221 20:27:29.819813  363941 out.go:179] * [test-preload-dl-gcs-162834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:29.821270  363941 notify.go:221] Checking for updates...
	I1221 20:27:29.821346  363941 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:29.822764  363941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:29.823992  363941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:29.825133  363941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:29.826282  363941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:29.828386  363941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:29.830556  363941 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:29.830699  363941 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:29.830820  363941 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:29.830953  363941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:29.861527  363941 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:29.861703  363941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:29.953209  363941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:29.939121744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:29.953366  363941 docker.go:319] overlay module found
	I1221 20:27:29.955154  363941 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:29.956624  363941 start.go:309] selected driver: docker
	I1221 20:27:29.956642  363941 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:29.956738  363941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:30.031664  363941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:30.019373797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:30.031898  363941 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:27:30.032640  363941 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 20:27:30.032855  363941 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 20:27:30.034232  363941 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:30.037893  363941 cni.go:84] Creating CNI manager for ""
	I1221 20:27:30.037983  363941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:30.037997  363941 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:30.038081  363941 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-162834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.1 ClusterName:test-preload-dl-gcs-162834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:30.039405  363941 out.go:179] * Starting "test-preload-dl-gcs-162834" primary control-plane node in "test-preload-dl-gcs-162834" cluster
	I1221 20:27:30.040526  363941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:30.041767  363941 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:30.042889  363941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.1 and runtime crio
	I1221 20:27:30.042992  363941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:30.063470  363941 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0-rc.1/preloaded-images-k8s-v18-v1.34.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:30.063501  363941 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:30.063720  363941 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.1 and runtime crio
	I1221 20:27:30.065441  363941 out.go:179] * Downloading Kubernetes v1.34.0-rc.1 preload ...
	W1221 20:27:28.136324  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:30.137250  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 21 20:26:55 embed-certs-413073 crio[574]: time="2025-12-21T20:26:55.17901749Z" level=info msg="Started container" PID=1786 containerID=62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper id=a3ca6161-8ec8-4cb5-b225-5baf30f71fa7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4f9e03592d32116cf290a8b9c9de62dc0f34461fd7d6e618376349ac586028
	Dec 21 20:26:56 embed-certs-413073 crio[574]: time="2025-12-21T20:26:56.057965228Z" level=info msg="Removing container: 00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30" id=417115b4-122c-45a6-8b68-d5a2d679d7a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:26:56 embed-certs-413073 crio[574]: time="2025-12-21T20:26:56.067220759Z" level=info msg="Removed container 00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=417115b4-122c-45a6-8b68-d5a2d679d7a5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.093343935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5dc018e8-4377-405d-ba44-ef5ca4039e92 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.094206588Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61b0c176-1448-4d56-8c31-3ae956639d4e name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.095188993Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=38663f25-0dde-4586-8170-69b7e4b0ca74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.095393308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099583053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099787066Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/44de77a3d9d8bf65f6a4d31a9f153c598292265f9d19716d07360106152f1861/merged/etc/passwd: no such file or directory"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.099819363Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/44de77a3d9d8bf65f6a4d31a9f153c598292265f9d19716d07360106152f1861/merged/etc/group: no such file or directory"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.100069709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.124201157Z" level=info msg="Created container 8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11: kube-system/storage-provisioner/storage-provisioner" id=38663f25-0dde-4586-8170-69b7e4b0ca74 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.124820877Z" level=info msg="Starting container: 8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11" id=f33ca9ed-9270-4dab-a485-f1c31a46f344 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:10 embed-certs-413073 crio[574]: time="2025-12-21T20:27:10.126783328Z" level=info msg="Started container" PID=1800 containerID=8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11 description=kube-system/storage-provisioner/storage-provisioner id=f33ca9ed-9270-4dab-a485-f1c31a46f344 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bb098dfff0e0ff39f45656ca49be303b53410e66e2b10b03d4e60eadaa76e79
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.985193092Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f675276f-edd7-4438-b95e-2b5cbe7208c3 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.986551232Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d177745-9460-4f28-bfe9-529231cb0dc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.98783741Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=9e986690-d26b-4fb5-9665-19235de6d606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.987978451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.996161044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:16 embed-certs-413073 crio[574]: time="2025-12-21T20:27:16.996886943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.030040476Z" level=info msg="Created container 2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=9e986690-d26b-4fb5-9665-19235de6d606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.030736278Z" level=info msg="Starting container: 2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8" id=24a6373d-ceb8-470e-a48a-c5ecf9e3c819 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.032791764Z" level=info msg="Started container" PID=1836 containerID=2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper id=24a6373d-ceb8-470e-a48a-c5ecf9e3c819 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e4f9e03592d32116cf290a8b9c9de62dc0f34461fd7d6e618376349ac586028
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.114727558Z" level=info msg="Removing container: 62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6" id=07623220-2551-4e48-8620-11fea9a98b31 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:17 embed-certs-413073 crio[574]: time="2025-12-21T20:27:17.124757293Z" level=info msg="Removed container 62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865/dashboard-metrics-scraper" id=07623220-2551-4e48-8620-11fea9a98b31 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2c58a04d839d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   3                   3e4f9e03592d3       dashboard-metrics-scraper-6ffb444bf9-bh865   kubernetes-dashboard
	8d28d1177e2b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   2bb098dfff0e0       storage-provisioner                          kube-system
	ae6a90080b1cc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   79164e5fe5402       kubernetes-dashboard-855c9754f9-mxshr        kubernetes-dashboard
	37580f5da34a0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   e59411b887ec2       busybox                                      default
	f45b5864907aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   df0b53d6fbded       coredns-66bc5c9577-lvwlf                     kube-system
	53c5617d6f7e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           53 seconds ago      Running             kindnet-cni                 0                   22e3cd34cb740       kindnet-qnfsx                                kube-system
	61b826608670a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   2bb098dfff0e0       storage-provisioner                          kube-system
	adec13e6a9730       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           53 seconds ago      Running             kube-proxy                  0                   d145947283747       kube-proxy-qvdzm                             kube-system
	020459e2a9f09       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   6aae4e5824a49       etcd-embed-certs-413073                      kube-system
	9830572fe0b45       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           56 seconds ago      Running             kube-apiserver              0                   87bf7a9ac8b1c       kube-apiserver-embed-certs-413073            kube-system
	c22f69d01095f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           56 seconds ago      Running             kube-scheduler              0                   a987d165e918c       kube-scheduler-embed-certs-413073            kube-system
	d06de390e7ce1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           56 seconds ago      Running             kube-controller-manager     0                   d3aed11fb704b       kube-controller-manager-embed-certs-413073   kube-system
	
	
	==> coredns [f45b5864907aa7149d8ee89baa80fc30eab0bb305e569c2ab4e60b6cbb776361] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60498 - 59054 "HINFO IN 8195386547535733228.7615412157056173379. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089319072s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-413073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-413073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=embed-certs-413073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_25_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:25:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-413073
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:27:08 +0000   Sun, 21 Dec 2025 20:25:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-413073
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                c4a4cac5-f7ed-43b3-8fd7-2b463810496e
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-lvwlf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-413073                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-qnfsx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-413073             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-413073    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qvdzm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-413073             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bh865    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mxshr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  117s (x8 over 118s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 118s)  kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node embed-certs-413073 event: Registered Node embed-certs-413073 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-413073 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)    kubelet          Node embed-certs-413073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)    kubelet          Node embed-certs-413073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)    kubelet          Node embed-certs-413073 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-413073 event: Registered Node embed-certs-413073 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [020459e2a9f09b965e88471eaa0ab65d6a8fec21868b994468e4f4f05e4cdbce] <==
	{"level":"warn","ts":"2025-12-21T20:26:37.424025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.432493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.440742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.446867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.453371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.459729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.466033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.474551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.482647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.490338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.505791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.513285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.521148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.527738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.534207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.541613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.547904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.554514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.562116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.569810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.576146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.592414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.600789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.607025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:26:37.662316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:27:32 up  1:10,  0 user,  load average: 4.49, 3.97, 2.83
	Linux embed-certs-413073 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [53c5617d6f7e51a46e8338a47451a359d269f12003165d179d08ca6a5eba2222] <==
	I1221 20:26:39.564281       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:26:39.594355       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1221 20:26:39.594531       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:26:39.594558       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:26:39.594576       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:26:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:26:39.795624       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:26:39.795683       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:26:39.795696       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:26:39.795845       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:26:40.195801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:26:40.195915       1 metrics.go:72] Registering metrics
	I1221 20:26:40.196022       1 controller.go:711] "Syncing nftables rules"
	I1221 20:26:49.796246       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:26:49.796315       1 main.go:301] handling current node
	I1221 20:26:59.797316       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:26:59.797371       1 main.go:301] handling current node
	I1221 20:27:09.796249       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:09.796296       1 main.go:301] handling current node
	I1221 20:27:19.796733       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:19.796766       1 main.go:301] handling current node
	I1221 20:27:29.799302       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1221 20:27:29.799349       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9830572fe0b45d426b58c094c403ce5d9fb75c44efd83e4f44b7080d83a2d676] <==
	I1221 20:26:38.121789       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1221 20:26:38.121881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1221 20:26:38.121848       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:26:38.121850       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1221 20:26:38.122353       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:26:38.123727       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1221 20:26:38.123797       1 aggregator.go:171] initial CRD sync complete...
	I1221 20:26:38.123808       1 autoregister_controller.go:144] Starting autoregister controller
	I1221 20:26:38.123814       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:26:38.123820       1 cache.go:39] Caches are synced for autoregister controller
	E1221 20:26:38.128357       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:26:38.129128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:26:38.141535       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:26:38.144714       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:26:38.371395       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:26:38.395324       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:26:38.410012       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:26:38.416891       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:26:38.422117       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:26:38.453650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.243.252"}
	I1221 20:26:38.461687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.47.190"}
	I1221 20:26:39.024174       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:26:41.649828       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:26:41.899176       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:26:41.949420       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d06de390e7ce1e0ab4ce9110861456a5d243aaf8e721686da3bc143cc4ea3d2f] <==
	I1221 20:26:41.446355       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1221 20:26:41.446418       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1221 20:26:41.446453       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:26:41.446639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:26:41.446654       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 20:26:41.446661       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 20:26:41.446868       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1221 20:26:41.447091       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1221 20:26:41.447103       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1221 20:26:41.447275       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 20:26:41.448407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1221 20:26:41.448593       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1221 20:26:41.449756       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1221 20:26:41.451596       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:26:41.452746       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:26:41.452797       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:26:41.452830       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:26:41.452836       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:26:41.452842       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:26:41.453904       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1221 20:26:41.455103       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1221 20:26:41.457440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:26:41.457459       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1221 20:26:41.469626       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1221 20:26:41.478143       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [adec13e6a9730c9c2014cce01c3ad44cb3cefafe029c7c1fc5a41b1514b28262] <==
	I1221 20:26:39.374557       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:26:39.441857       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:26:39.542570       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:26:39.542604       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1221 20:26:39.542691       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:26:39.562196       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:26:39.562266       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:26:39.567488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:26:39.567835       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:26:39.567876       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:39.569411       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:26:39.569501       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:26:39.569474       1 config.go:200] "Starting service config controller"
	I1221 20:26:39.569878       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:26:39.569849       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:26:39.570185       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:26:39.570365       1 config.go:309] "Starting node config controller"
	I1221 20:26:39.571115       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:26:39.571181       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:26:39.669969       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:26:39.669978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:26:39.671116       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c22f69d01095f1f22412b0ea5f3062f1707e81fac3154063e833a6cfc1cae536] <==
	I1221 20:26:37.158576       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:26:38.036500       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:26:38.036565       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:26:38.036578       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:26:38.036588       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:26:38.061239       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 20:26:38.062078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:26:38.065622       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:38.065706       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:26:38.066706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:26:38.066794       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:26:38.166283       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:26:43 embed-certs-413073 kubelet[735]: I1221 20:26:43.525638     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 21 20:26:45 embed-certs-413073 kubelet[735]: I1221 20:26:45.025182     735 scope.go:117] "RemoveContainer" containerID="bf0d3fe340164b2b50c3e5dafa344e41e97a297126f6d1996a29a9ed1219d832"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: I1221 20:26:46.030624     735 scope.go:117] "RemoveContainer" containerID="bf0d3fe340164b2b50c3e5dafa344e41e97a297126f6d1996a29a9ed1219d832"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: I1221 20:26:46.030806     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:46 embed-certs-413073 kubelet[735]: E1221 20:26:46.031008     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:26:47 embed-certs-413073 kubelet[735]: I1221 20:26:47.035352     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:47 embed-certs-413073 kubelet[735]: E1221 20:26:47.035580     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:26:48 embed-certs-413073 kubelet[735]: I1221 20:26:48.050623     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mxshr" podStartSLOduration=0.412522775 podStartE2EDuration="6.050598586s" podCreationTimestamp="2025-12-21 20:26:42 +0000 UTC" firstStartedPulling="2025-12-21 20:26:42.34629344 +0000 UTC m=+6.448093402" lastFinishedPulling="2025-12-21 20:26:47.984369253 +0000 UTC m=+12.086169213" observedRunningTime="2025-12-21 20:26:48.050274887 +0000 UTC m=+12.152074876" watchObservedRunningTime="2025-12-21 20:26:48.050598586 +0000 UTC m=+12.152398555"
	Dec 21 20:26:55 embed-certs-413073 kubelet[735]: I1221 20:26:55.136657     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: I1221 20:26:56.056726     735 scope.go:117] "RemoveContainer" containerID="00e93158c39cb49f740b5529eb4bd87a885924c58b288812cf6490068ea72f30"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: I1221 20:26:56.056925     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:26:56 embed-certs-413073 kubelet[735]: E1221 20:26:56.057123     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:05 embed-certs-413073 kubelet[735]: I1221 20:27:05.136972     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:05 embed-certs-413073 kubelet[735]: E1221 20:27:05.137214     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:10 embed-certs-413073 kubelet[735]: I1221 20:27:10.092952     735 scope.go:117] "RemoveContainer" containerID="61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81"
	Dec 21 20:27:16 embed-certs-413073 kubelet[735]: I1221 20:27:16.984597     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: I1221 20:27:17.113421     735 scope.go:117] "RemoveContainer" containerID="62c7dadbbe03c2fc60b7a8b27bad24ac49f7d927c187d8d9ce49ff3816b348b6"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: I1221 20:27:17.113626     735 scope.go:117] "RemoveContainer" containerID="2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	Dec 21 20:27:17 embed-certs-413073 kubelet[735]: E1221 20:27:17.113860     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:25 embed-certs-413073 kubelet[735]: I1221 20:27:25.136316     735 scope.go:117] "RemoveContainer" containerID="2c58a04d839d9343fb71ada7f47ff601bad4afb39aa8c8a85ac9d4c59ce68ef8"
	Dec 21 20:27:25 embed-certs-413073 kubelet[735]: E1221 20:27:25.136593     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bh865_kubernetes-dashboard(2f7a44ae-1e89-4166-94ff-4a35be3867de)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bh865" podUID="2f7a44ae-1e89-4166-94ff-4a35be3867de"
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:27 embed-certs-413073 systemd[1]: kubelet.service: Consumed 1.612s CPU time.
	
	
	==> kubernetes-dashboard [ae6a90080b1cc970c35c86eb3fe253112c1113429e74eaa6f47b141f0680007c] <==
	2025/12/21 20:26:48 Starting overwatch
	2025/12/21 20:26:48 Using namespace: kubernetes-dashboard
	2025/12/21 20:26:48 Using in-cluster config to connect to apiserver
	2025/12/21 20:26:48 Using secret token for csrf signing
	2025/12/21 20:26:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:26:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:26:48 Successful initial request to the apiserver, version: v1.34.3
	2025/12/21 20:26:48 Generating JWE encryption key
	2025/12/21 20:26:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:26:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:26:48 Initializing JWE encryption key from synchronized object
	2025/12/21 20:26:48 Creating in-cluster Sidecar client
	2025/12/21 20:26:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:26:48 Serving insecurely on HTTP port: 9090
	2025/12/21 20:27:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [61b826608670a5bf7806284e4383cf267544b916ba8d88f800e4ec145035af81] <==
	I1221 20:26:39.348901       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:27:09.351590       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8d28d1177e2b2d69f59c32c4f1b99fa895359b1c7b3683736d95287471824e11] <==
	I1221 20:27:10.139389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:27:10.146986       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:27:10.147032       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:27:10.149475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:13.604619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:17.865489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:21.465172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:24.519759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.542706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.550136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:27.550389       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:27:27.550519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce2740a9-39c8-4989-95c5-9081eeb21fd3", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10 became leader
	I1221 20:27:27.550588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10!
	W1221 20:27:27.554987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:27.562163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:27:27.651569       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-413073_385d883f-eb7b-4032-8966-ac48260aeb10!
	W1221 20:27:29.567927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:29.572695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:31.576680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:31.585103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413073 -n embed-certs-413073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-413073 -n embed-certs-413073: exit status 2 (317.697448ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-413073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.72s)
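The non-zero status probe above (exit status 2, noted as "may be ok") is consistent with a half-paused profile: the kubelet log earlier in this post-mortem ends with systemd stopping kubelet.service at 20:27:27, right when the Pause step ran. A small manual inspection/recovery sketch, reusing the binary and profile name from this report (the unpause step is an assumption about how one would recover by hand, not something the test ran):

	out/minikube-linux-amd64 status -p embed-certs-413073                 # shows which components are stopped or paused
	out/minikube-linux-amd64 logs -p embed-certs-413073 --file=logs.txt   # same log collection the advice text suggests
	out/minikube-linux-amd64 unpause -p embed-certs-413073                # hypothetical recovery: restart kubelet and unpause containers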

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.185081ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
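The exit status 11 above comes from minikube's paused-state check: per the stderr, it shells out to `sudo runc list -f json`, which fails on this CRI-O node because /run/runc does not exist. A minimal reproduction sketch against the node container (hypothetical commands, assuming the docker-driver container for this profile is still running and that crictl is available inside it):

	docker exec newest-cni-734511 sudo runc list -f json   # expected to fail with "open /run/runc: no such file or directory", as in the stderr above
	docker exec newest-cni-734511 sudo crictl ps -a        # listing containers through the CRI socket still works with cri-o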
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-734511
helpers_test.go:244: (dbg) docker inspect newest-cni-734511:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	        "Created": "2025-12-21T20:27:08.312566365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:27:08.349061826Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hostname",
	        "HostsPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hosts",
	        "LogPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca-json.log",
	        "Name": "/newest-cni-734511",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-734511:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-734511",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	                "LowerDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-734511",
	                "Source": "/var/lib/docker/volumes/newest-cni-734511/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-734511",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-734511",
	                "name.minikube.sigs.k8s.io": "newest-cni-734511",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8288912a37d4b84f8a418c4cb10630348f5b98b1924eeccb4396832694d6c83a",
	            "SandboxKey": "/var/run/docker/netns/8288912a37d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-734511": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14816134e98be2c6f9635a0cd5947ae7aa1c8333188fd4c39e01a9672f929d75",
	                    "EndpointID": "9ca785a259908d03cbcb1bbf42cbbc13c483a7a7f731f9ed4c5ecb0cc14b9ab7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1e:72:3d:41:f9:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-734511",
	                        "f11eda59f7a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
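(Editor's note: the inspect output above is what the harness uses to reach the node — the published "22/tcp" mapping, host port 33134 in this capture, is read back with the same Go template that appears in the cli_runner log lines below. As a minimal, hypothetical sketch that is not part of the minikube source, the lookup can be reproduced from Go by shelling out to the docker CLI; the container name and port are taken from the output above.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template string seen in the cli_runner log lines; docker resolves it
		// against the container's NetworkSettings.Ports map from the inspect output.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "newest-cni-734511").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the state captured above this prints 33134, the port sshutil dials.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}

(Run against a live container this is equivalent to the `docker container inspect -f ...` invocations the harness logs when it builds its SSH client.)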
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-734511 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-734511 logs -n 25: (1.20504306s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable metrics-server -p no-preload-328404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │                     │
	│ stop    │ -p no-preload-328404 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:25 UTC │
	│ start   │ -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:25 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-413073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p embed-certs-413073 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ addons  │ enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ start   │ -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-766361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-766361 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ old-k8s-version-699289 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │ 21 Dec 25 20:26 UTC │
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289       │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073           │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:04.161028  356149 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:04.161303  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161311  356149 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:04.161315  356149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:04.161505  356149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:04.161969  356149 out.go:368] Setting JSON to false
	I1221 20:27:04.163121  356149 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4173,"bootTime":1766344651,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:04.163191  356149 start.go:143] virtualization: kvm guest
	I1221 20:27:04.165113  356149 out.go:179] * [newest-cni-734511] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:04.166326  356149 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:04.166322  356149 notify.go:221] Checking for updates...
	I1221 20:27:04.168489  356149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:04.169743  356149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:04.170878  356149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:04.171920  356149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:04.172960  356149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:04.174444  356149 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174550  356149 config.go:182] Loaded profile config "embed-certs-413073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:04.174656  356149 config.go:182] Loaded profile config "no-preload-328404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:04.174752  356149 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:04.199923  356149 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:04.200099  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.255148  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.245163223 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.255317  356149 docker.go:319] overlay module found
	I1221 20:27:04.256944  356149 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:04.258122  356149 start.go:309] selected driver: docker
	I1221 20:27:04.258135  356149 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:04.258146  356149 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:04.258746  356149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:04.313188  356149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-21 20:27:04.304012682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:04.313409  356149 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	W1221 20:27:04.313445  356149 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1221 20:27:04.313719  356149 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:04.315617  356149 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:04.316685  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:04.316752  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:04.316769  356149 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:04.316847  356149 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:04.318044  356149 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:04.319025  356149 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:04.319999  356149 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:04.320951  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.320986  356149 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:04.320999  356149 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:04.321043  356149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:04.321074  356149 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:04.321084  356149 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:04.321164  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:04.321181  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json: {Name:mka6cda6f0218fe0b8ed835e73384be1466cd914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:04.340148  356149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:04.340164  356149 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:04.340186  356149 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:04.340217  356149 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:04.340337  356149 start.go:364] duration metric: took 80.745µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:04.340360  356149 start.go:93] Provisioning new machine with config: &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:04.340419  356149 start.go:125] createHost starting for "" (driver="docker")
	W1221 20:27:00.711936  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	W1221 20:27:03.210810  345910 pod_ready.go:104] pod "coredns-7d764666f9-wkztz" is not "Ready", error: <nil>
	I1221 20:27:04.712597  345910 pod_ready.go:94] pod "coredns-7d764666f9-wkztz" is "Ready"
	I1221 20:27:04.712638  345910 pod_ready.go:86] duration metric: took 39.007284258s for pod "coredns-7d764666f9-wkztz" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.715404  345910 pod_ready.go:83] waiting for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.719865  345910 pod_ready.go:94] pod "etcd-no-preload-328404" is "Ready"
	I1221 20:27:04.719886  345910 pod_ready.go:86] duration metric: took 4.454533ms for pod "etcd-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.722758  345910 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.726749  345910 pod_ready.go:94] pod "kube-apiserver-no-preload-328404" is "Ready"
	I1221 20:27:04.726768  345910 pod_ready.go:86] duration metric: took 3.987664ms for pod "kube-apiserver-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.728754  345910 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:04.909743  345910 pod_ready.go:94] pod "kube-controller-manager-no-preload-328404" is "Ready"
	I1221 20:27:04.909773  345910 pod_ready.go:86] duration metric: took 180.998003ms for pod "kube-controller-manager-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.110503  345910 pod_ready.go:83] waiting for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.509849  345910 pod_ready.go:94] pod "kube-proxy-tnpxj" is "Ready"
	I1221 20:27:05.509877  345910 pod_ready.go:86] duration metric: took 399.350496ms for pod "kube-proxy-tnpxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:05.710358  345910 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109831  345910 pod_ready.go:94] pod "kube-scheduler-no-preload-328404" is "Ready"
	I1221 20:27:06.109858  345910 pod_ready.go:86] duration metric: took 399.475178ms for pod "kube-scheduler-no-preload-328404" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:06.109870  345910 pod_ready.go:40] duration metric: took 40.408845738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:06.161975  345910 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:06.166771  345910 out.go:179] * Done! kubectl is now configured to use "no-preload-328404" cluster and "default" namespace by default
	I1221 20:27:01.942630  355293 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-766361" ...
	I1221 20:27:01.942690  355293 cli_runner.go:164] Run: docker start default-k8s-diff-port-766361
	I1221 20:27:02.181766  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:02.200499  355293 kic.go:430] container "default-k8s-diff-port-766361" state is running.
	I1221 20:27:02.200866  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:02.221322  355293 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/config.json ...
	I1221 20:27:02.221536  355293 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:02.221591  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:02.240688  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:02.240957  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:02.240973  355293 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:02.241682  355293 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43750->127.0.0.1:33129: read: connection reset by peer
	I1221 20:27:05.381889  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.381916  355293 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-766361"
	I1221 20:27:05.381967  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.401135  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.401433  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.401460  355293 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766361 && echo "default-k8s-diff-port-766361" | sudo tee /etc/hostname
	I1221 20:27:05.555524  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766361
	
	I1221 20:27:05.555604  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.576000  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:05.576357  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:05.576389  355293 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766361/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:05.714615  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:05.714643  355293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:05.714683  355293 ubuntu.go:190] setting up certificates
	I1221 20:27:05.714693  355293 provision.go:84] configureAuth start
	I1221 20:27:05.714749  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:05.733905  355293 provision.go:143] copyHostCerts
	I1221 20:27:05.734008  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:05.734027  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:05.734108  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:05.734253  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:05.734268  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:05.734313  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:05.734473  355293 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:05.734485  355293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:05.734515  355293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:05.734605  355293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766361 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-766361 localhost minikube]
	I1221 20:27:05.885586  355293 provision.go:177] copyRemoteCerts
	I1221 20:27:05.885657  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:05.885704  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:05.903686  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:06.004376  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:06.022329  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1221 20:27:06.039861  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:06.057192  355293 provision.go:87] duration metric: took 342.475794ms to configureAuth
	I1221 20:27:06.057250  355293 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:06.057479  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:06.057615  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:06.077189  355293 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:06.077572  355293 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1221 20:27:06.077607  355293 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1221 20:27:05.109977  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:07.609706  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:04.342608  356149 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1221 20:27:04.342833  356149 start.go:159] libmachine.API.Create for "newest-cni-734511" (driver="docker")
	I1221 20:27:04.342865  356149 client.go:173] LocalClient.Create starting
	I1221 20:27:04.342925  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem
	I1221 20:27:04.342953  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.342973  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343034  356149 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem
	I1221 20:27:04.343056  356149 main.go:144] libmachine: Decoding PEM data...
	I1221 20:27:04.343071  356149 main.go:144] libmachine: Parsing certificate...
	I1221 20:27:04.343576  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 20:27:04.359499  356149 cli_runner.go:211] docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 20:27:04.359553  356149 network_create.go:284] running [docker network inspect newest-cni-734511] to gather additional debugging logs...
	I1221 20:27:04.359572  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511
	W1221 20:27:04.375487  356149 cli_runner.go:211] docker network inspect newest-cni-734511 returned with exit code 1
	I1221 20:27:04.375516  356149 network_create.go:287] error running [docker network inspect newest-cni-734511]: docker network inspect newest-cni-734511: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-734511 not found
	I1221 20:27:04.375530  356149 network_create.go:289] output of [docker network inspect newest-cni-734511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-734511 not found
	
	** /stderr **
	I1221 20:27:04.375669  356149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:04.393047  356149 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f29a930c06e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8b:29:89:af:bd} reservation:<nil>}
	I1221 20:27:04.393765  356149 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ef9486b81b4e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:74:fc:8d:d6:e1} reservation:<nil>}
	I1221 20:27:04.394589  356149 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a8eed82beee6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:19:43:42:02:f6} reservation:<nil>}
	I1221 20:27:04.395482  356149 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e58c10}
	I1221 20:27:04.395503  356149 network_create.go:124] attempt to create docker network newest-cni-734511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1221 20:27:04.395573  356149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-734511 newest-cni-734511
	I1221 20:27:04.440797  356149 network_create.go:108] docker network newest-cni-734511 192.168.76.0/24 created
	I1221 20:27:04.440827  356149 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-734511" container
	I1221 20:27:04.440895  356149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 20:27:04.457596  356149 cli_runner.go:164] Run: docker volume create newest-cni-734511 --label name.minikube.sigs.k8s.io=newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true
	I1221 20:27:04.474472  356149 oci.go:103] Successfully created a docker volume newest-cni-734511
	I1221 20:27:04.474552  356149 cli_runner.go:164] Run: docker run --rm --name newest-cni-734511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --entrypoint /usr/bin/test -v newest-cni-734511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
	I1221 20:27:04.874657  356149 oci.go:107] Successfully prepared a docker volume newest-cni-734511
	I1221 20:27:04.874806  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:04.874826  356149 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 20:27:04.874898  356149 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 20:27:08.234181  356149 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-734511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359233452s)
	I1221 20:27:08.234217  356149 kic.go:203] duration metric: took 3.359386954s to extract preloaded images to volume ...
	W1221 20:27:08.234353  356149 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1221 20:27:08.234414  356149 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1221 20:27:08.234470  356149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 20:27:08.295476  356149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-734511 --name newest-cni-734511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-734511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-734511 --network newest-cni-734511 --ip 192.168.76.2 --volume newest-cni-734511:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
	I1221 20:27:08.565567  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Running}}
	I1221 20:27:08.583983  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.604221  356149 cli_runner.go:164] Run: docker exec newest-cni-734511 stat /var/lib/dpkg/alternatives/iptables
	I1221 20:27:08.654194  356149 oci.go:144] the created container "newest-cni-734511" has a running status.
	I1221 20:27:08.654253  356149 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa...
	I1221 20:27:08.704802  356149 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 20:27:08.732838  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.751273  356149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 20:27:08.751296  356149 kic_runner.go:114] Args: [docker exec --privileged newest-cni-734511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 20:27:08.793174  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:08.814689  356149 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:08.814784  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:08.835179  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:08.835685  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:08.835721  356149 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:08.836734  356149 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54514->127.0.0.1:33134: read: connection reset by peer
	I1221 20:27:08.318032  355293 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:08.318063  355293 machine.go:97] duration metric: took 6.096511406s to provisionDockerMachine
	I1221 20:27:08.318079  355293 start.go:293] postStartSetup for "default-k8s-diff-port-766361" (driver="docker")
	I1221 20:27:08.318096  355293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:08.318170  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:08.318243  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.339519  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.441820  355293 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:08.446242  355293 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:08.446278  355293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:08.446291  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:08.446430  355293 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:08.446568  355293 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:08.446699  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:08.454698  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:08.473177  355293 start.go:296] duration metric: took 155.082818ms for postStartSetup
	I1221 20:27:08.473319  355293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:08.473379  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.492373  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.588791  355293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:08.593998  355293 fix.go:56] duration metric: took 6.67202468s for fixHost
	I1221 20:27:08.594026  355293 start.go:83] releasing machines lock for "default-k8s-diff-port-766361", held for 6.672074779s
	I1221 20:27:08.594093  355293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-766361
	I1221 20:27:08.614584  355293 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:08.614626  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.614688  355293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:08.614776  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:08.635066  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.635410  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:08.798479  355293 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:08.805888  355293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:08.851201  355293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:08.857838  355293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:08.857908  355293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:08.869971  355293 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:08.869994  355293 start.go:496] detecting cgroup driver to use...
	I1221 20:27:08.870021  355293 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:08.870056  355293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:08.886198  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:08.900320  355293 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:08.900392  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:08.916379  355293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:08.929614  355293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:09.017529  355293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:09.102487  355293 docker.go:234] disabling docker service ...
	I1221 20:27:09.102541  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:09.117923  355293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:09.130875  355293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:09.210057  355293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:09.290821  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:09.302670  355293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:09.316043  355293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:09.316090  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.324521  355293 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:09.324576  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.332846  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.340926  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.349091  355293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:09.357325  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.366239  355293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.374613  355293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:09.383590  355293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:09.390644  355293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:09.397642  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.469485  355293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:27:09.603676  355293 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:09.603754  355293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:09.608196  355293 start.go:564] Will wait 60s for crictl version
	I1221 20:27:09.608299  355293 ssh_runner.go:195] Run: which crictl
	I1221 20:27:09.611955  355293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:09.635202  355293 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:09.635292  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.662582  355293 ssh_runner.go:195] Run: crio --version
	I1221 20:27:09.691390  355293 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
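
For reference, the sed invocations and the crio restart logged just above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager. A minimal shell sketch of that same sequence, reconstructed from the log for reproducing it by hand inside the node (run via `minikube ssh`, not on the host; paths and values are as logged):

	# editorial sketch, not part of the test output
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version   # should report RuntimeName cri-o, RuntimeApiVersion v1
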
	I1221 20:27:09.692632  355293 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-766361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:09.713083  355293 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:09.717679  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.728452  355293 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:09.728580  355293 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1221 20:27:09.728646  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.760480  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.760502  355293 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:09.760551  355293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:09.786108  355293 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:09.786130  355293 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:09.786137  355293 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.3 crio true true} ...
	I1221 20:27:09.786272  355293 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-766361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:09.786341  355293 ssh_runner.go:195] Run: crio config
	I1221 20:27:09.833071  355293 cni.go:84] Creating CNI manager for ""
	I1221 20:27:09.833099  355293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:09.833112  355293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1221 20:27:09.833133  355293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766361 NodeName:default-k8s-diff-port-766361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:09.833275  355293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766361"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:09.833341  355293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1221 20:27:09.842261  355293 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:09.842317  355293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:09.849946  355293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1221 20:27:09.861851  355293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 20:27:09.873798  355293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1221 20:27:09.886300  355293 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:09.889860  355293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:09.899253  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:09.978391  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
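
For reference, the rendered kubeadm config has just been written to /var/tmp/minikube/kubeadm.yaml.new (2227 bytes, per the scp line above). A minimal sketch of the comparison minikube performs a few steps later to decide whether the control plane needs reconfiguring; an empty diff means the existing cluster config can be reused:

	# sketch reconstructed from the log: compare generated vs. active kubeadm config
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "kubeadm config unchanged - restart without reconfiguration"
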
	I1221 20:27:10.002606  355293 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361 for IP: 192.168.103.2
	I1221 20:27:10.002626  355293 certs.go:195] generating shared ca certs ...
	I1221 20:27:10.002644  355293 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.002811  355293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:10.002880  355293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:10.002892  355293 certs.go:257] generating profile certs ...
	I1221 20:27:10.003002  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/client.key
	I1221 20:27:10.003076  355293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key.07b6dc53
	I1221 20:27:10.003131  355293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key
	I1221 20:27:10.003288  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:10.003336  355293 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:10.003359  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:10.003393  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:10.003426  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:10.003465  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:10.003533  355293 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:10.004374  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:10.023130  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:10.042080  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:10.062135  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:10.085174  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1221 20:27:10.106654  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:10.126596  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:10.145813  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/default-k8s-diff-port-766361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:10.163770  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:10.180292  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:10.198557  355293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:10.214868  355293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:10.226847  355293 ssh_runner.go:195] Run: openssl version
	I1221 20:27:10.233097  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.240743  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:10.248144  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251615  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.251669  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:10.287002  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:10.294132  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.301357  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:10.308313  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311705  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.311741  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:10.346268  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:10.353551  355293 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.360546  355293 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:10.367671  355293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371287  355293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.371336  355293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:10.406685  355293 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:10.413819  355293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:10.417462  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:10.454011  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:10.488179  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:10.533872  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:10.576052  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:10.629693  355293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
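
The run of `openssl x509 -noout ... -checkend 86400` commands above verifies that each control-plane certificate stays valid for at least the next 24 hours (86400 seconds). A minimal sketch of the same check run by hand, using the certificate paths from the log; -checkend exits 0 when the certificate is still valid at the given offset and 1 when it would already have expired:

	# sketch: 24h expiry check for the certificates probed in the log above
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
	done
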
	I1221 20:27:10.670862  355293 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-766361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-766361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:10.670963  355293 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:10.671037  355293 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:10.702259  355293 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:27:10.702282  355293 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:27:10.702285  355293 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:27:10.702295  355293 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:27:10.702298  355293 cri.go:96] found id: ""
	I1221 20:27:10.702339  355293 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:10.714908  355293 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:10Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:10.714989  355293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:10.722893  355293 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:10.722911  355293 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:10.722953  355293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:10.730397  355293 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:10.731501  355293 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-766361" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.732093  355293 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-766361" cluster setting kubeconfig missing "default-k8s-diff-port-766361" context setting]
	I1221 20:27:10.733154  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.734776  355293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:10.742370  355293 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1221 20:27:10.742398  355293 kubeadm.go:602] duration metric: took 19.480686ms to restartPrimaryControlPlane
	I1221 20:27:10.742407  355293 kubeadm.go:403] duration metric: took 71.557752ms to StartCluster
	I1221 20:27:10.742421  355293 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.742483  355293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:10.744452  355293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:10.744686  355293 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:10.744774  355293 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:10.744878  355293 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744895  355293 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:10.744908  355293 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744913  355293 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-766361"
	I1221 20:27:10.744941  355293 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-766361"
	I1221 20:27:10.744900  355293 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.744955  355293 addons.go:248] addon dashboard should already be in state true
	W1221 20:27:10.744979  355293 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:10.744986  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.745018  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.744922  355293 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766361"
	I1221 20:27:10.745404  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745485  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.745524  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.750065  355293 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:10.751603  355293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:10.771924  355293 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1221 20:27:10.771928  355293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:10.773031  355293 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.773050  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:10.773064  355293 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:10.773110  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.773127  355293 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-766361"
	W1221 20:27:10.773144  355293 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:10.773173  355293 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:10.773700  355293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:10.774627  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:10.774645  355293 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:10.774701  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.807788  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.809438  355293 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.809458  355293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:10.809514  355293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:10.812330  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.832737  355293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:10.891658  355293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:10.905174  355293 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:27:10.923657  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:10.923678  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:10.924773  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:10.938030  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:10.938053  355293 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:10.947339  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:10.952101  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:10.952123  355293 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:10.966725  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:10.966747  355293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:10.982019  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:10.982043  355293 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:10.996528  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:10.996558  355293 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:11.009822  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:11.009847  355293 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:11.022602  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:11.022625  355293 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:11.034599  355293 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:11.034621  355293 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:11.046622  355293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1221 20:27:09.610037  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	W1221 20:27:12.110288  349045 pod_ready.go:104] pod "coredns-66bc5c9577-lvwlf" is not "Ready", error: <nil>
	I1221 20:27:12.977615  355293 node_ready.go:49] node "default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:12.977667  355293 node_ready.go:38] duration metric: took 2.072442361s for node "default-k8s-diff-port-766361" to be "Ready" ...
	I1221 20:27:12.977685  355293 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:12.977831  355293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:13.589060  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.664212034s)
	I1221 20:27:13.589105  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.641740556s)
	I1221 20:27:13.589236  355293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.542570549s)
	I1221 20:27:13.589304  355293 api_server.go:72] duration metric: took 2.844588927s to wait for apiserver process to appear ...
	I1221 20:27:13.589365  355293 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:13.589385  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:13.590939  355293 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-766361 addons enable metrics-server
	
	I1221 20:27:13.594212  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:13.594241  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
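
The 500 above is expected this early in an apiserver restart: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, so minikube keeps polling /healthz until it returns 200. A minimal sketch of the same probe by hand, assuming the apiserver endpoint logged above; the verbose output ends with "healthz check passed" once every hook reports ok (depending on RBAC state the request may need credentials):

	# sketch: poll the apiserver health endpoint used in the log above
	until curl -ks "https://192.168.103.2:8444/healthz?verbose" | grep -q "healthz check passed"; do
	  sleep 2
	done
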
	I1221 20:27:13.599341  355293 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:11.977348  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:11.977379  356149 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:11.977454  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:11.999751  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:11.999976  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:11.999994  356149 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:12.157144  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:12.157257  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.179924  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.180242  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.180272  356149 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:12.325486  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:12.325514  356149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:12.325536  356149 ubuntu.go:190] setting up certificates
	I1221 20:27:12.325549  356149 provision.go:84] configureAuth start
	I1221 20:27:12.325622  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:12.346791  356149 provision.go:143] copyHostCerts
	I1221 20:27:12.346858  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:12.346870  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:12.346953  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:12.347063  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:12.347077  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:12.347117  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:12.347205  356149 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:12.347216  356149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:12.347269  356149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:12.347357  356149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:12.416614  356149 provision.go:177] copyRemoteCerts
	I1221 20:27:12.416685  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:12.416736  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.438322  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:12.547462  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:12.566972  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:12.584445  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:12.602292  356149 provision.go:87] duration metric: took 276.731864ms to configureAuth
	I1221 20:27:12.602317  356149 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:12.602481  356149 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:12.602570  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.628085  356149 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:12.628416  356149 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1221 20:27:12.628446  356149 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:12.963462  356149 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:12.963499  356149 machine.go:97] duration metric: took 4.148788477s to provisionDockerMachine
	I1221 20:27:12.963511  356149 client.go:176] duration metric: took 8.620635665s to LocalClient.Create
	I1221 20:27:12.963527  356149 start.go:167] duration metric: took 8.620693811s to libmachine.API.Create "newest-cni-734511"
	I1221 20:27:12.963536  356149 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:12.963549  356149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:12.963616  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:12.963661  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:12.994720  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.106837  356149 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:13.112217  356149 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:13.112284  356149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:13.112297  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:13.112360  356149 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:13.112453  356149 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:13.112574  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:13.123914  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:13.152209  356149 start.go:296] duration metric: took 188.649352ms for postStartSetup
	I1221 20:27:13.152586  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.174145  356149 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:13.174476  356149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:13.174533  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.195734  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.296538  356149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:13.301216  356149 start.go:128] duration metric: took 8.960783247s to createHost
	I1221 20:27:13.301259  356149 start.go:83] releasing machines lock for "newest-cni-734511", held for 8.96090932s
	I1221 20:27:13.301374  356149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:13.323173  356149 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:13.323205  356149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:13.323244  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.323280  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:13.346513  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.347201  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:13.456203  356149 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:13.536683  356149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:13.585062  356149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:13.590455  356149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:13.590524  356149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:13.622114  356149 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
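
Note: the step above disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix. A more readable sketch of the same find/mv invocation (same paths as reported in the log) is:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
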
	I1221 20:27:13.622139  356149 start.go:496] detecting cgroup driver to use...
	I1221 20:27:13.622174  356149 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:13.622272  356149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:13.639104  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:13.651381  356149 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:13.651453  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:13.667983  356149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:13.685002  356149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:13.775846  356149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:13.866075  356149 docker.go:234] disabling docker service ...
	I1221 20:27:13.866146  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:13.884898  356149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:13.897846  356149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:14.008693  356149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:14.106719  356149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:14.123351  356149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:14.141529  356149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:14.141589  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.153526  356149 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:14.153582  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.164449  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.173423  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.182016  356149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:14.190302  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.198806  356149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.212456  356149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:14.221521  356149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:14.228570  356149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:14.235738  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.317556  356149 ssh_runner.go:195] Run: sudo systemctl restart crio
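
Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the rest of this run depends on. A quick way to confirm them by hand (a sketch, run from the host against the node container named in this log) would be:

    docker exec newest-cni-734511 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",
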
	I1221 20:27:14.455679  356149 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:14.455753  356149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:14.459940  356149 start.go:564] Will wait 60s for crictl version
	I1221 20:27:14.459986  356149 ssh_runner.go:195] Run: which crictl
	I1221 20:27:14.463397  356149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:14.489140  356149 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:14.489245  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.517363  356149 ssh_runner.go:195] Run: crio --version
	I1221 20:27:14.546070  356149 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:14.547316  356149 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:14.565561  356149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:14.569784  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:14.581403  356149 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:13.608430  349045 pod_ready.go:94] pod "coredns-66bc5c9577-lvwlf" is "Ready"
	I1221 20:27:13.608466  349045 pod_ready.go:86] duration metric: took 34.004349297s for pod "coredns-66bc5c9577-lvwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.611841  349045 pod_ready.go:83] waiting for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.616529  349045 pod_ready.go:94] pod "etcd-embed-certs-413073" is "Ready"
	I1221 20:27:13.616554  349045 pod_ready.go:86] duration metric: took 4.687623ms for pod "etcd-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.618652  349045 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.622524  349045 pod_ready.go:94] pod "kube-apiserver-embed-certs-413073" is "Ready"
	I1221 20:27:13.622543  349045 pod_ready.go:86] duration metric: took 3.869908ms for pod "kube-apiserver-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.624168  349045 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:13.809151  349045 pod_ready.go:94] pod "kube-controller-manager-embed-certs-413073" is "Ready"
	I1221 20:27:13.809190  349045 pod_ready.go:86] duration metric: took 184.998965ms for pod "kube-controller-manager-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.007416  349045 pod_ready.go:83] waiting for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.407581  349045 pod_ready.go:94] pod "kube-proxy-qvdzm" is "Ready"
	I1221 20:27:14.407613  349045 pod_ready.go:86] duration metric: took 400.166324ms for pod "kube-proxy-qvdzm" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:14.607762  349045 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007654  349045 pod_ready.go:94] pod "kube-scheduler-embed-certs-413073" is "Ready"
	I1221 20:27:15.007680  349045 pod_ready.go:86] duration metric: took 399.898068ms for pod "kube-scheduler-embed-certs-413073" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:15.007693  349045 pod_ready.go:40] duration metric: took 35.406275565s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:15.061539  349045 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:15.063682  349045 out.go:179] * Done! kubectl is now configured to use "embed-certs-413073" cluster and "default" namespace by default
	I1221 20:27:13.600450  355293 addons.go:530] duration metric: took 2.85570077s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:14.089929  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.094849  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:14.094882  355293 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:14.590379  355293 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1221 20:27:14.595270  355293 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1221 20:27:14.596370  355293 api_server.go:141] control plane version: v1.34.3
	I1221 20:27:14.596406  355293 api_server.go:131] duration metric: took 1.007034338s to wait for apiserver health ...
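
Note: the transient 500 above is expected during apiserver startup; the verbose healthz output shows only poststarthook/rbac/bootstrap-roles failing, which typically clears once the bootstrap RBAC objects have been created, after which the probe returns a plain 200. The same check can be reproduced by hand against the non-default 8444 port (a sketch; -k skips verification of the cluster's self-signed certificate):

    curl -sk "https://192.168.103.2:8444/healthz?verbose"
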
	I1221 20:27:14.596417  355293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:14.600490  355293 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:14.600533  355293 system_pods.go:61] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.600546  355293 system_pods.go:61] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.600559  355293 system_pods.go:61] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.600568  355293 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.600578  355293 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.600589  355293 system_pods.go:61] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.600597  355293 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.600605  355293 system_pods.go:61] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.600612  355293 system_pods.go:74] duration metric: took 4.188527ms to wait for pod list to return data ...
	I1221 20:27:14.600623  355293 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:14.602947  355293 default_sa.go:45] found service account: "default"
	I1221 20:27:14.602965  355293 default_sa.go:55] duration metric: took 2.335405ms for default service account to be created ...
	I1221 20:27:14.602975  355293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 20:27:14.605791  355293 system_pods.go:86] 8 kube-system pods found
	I1221 20:27:14.605823  355293 system_pods.go:89] "coredns-66bc5c9577-bp67f" [17b70c90-6d4f-48e6-9fa7-a491c9720564] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1221 20:27:14.605839  355293 system_pods.go:89] "etcd-default-k8s-diff-port-766361" [7f7082eb-10b6-4942-8c05-fd2217a3e1b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:14.605850  355293 system_pods.go:89] "kindnet-td7vw" [75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678] Running
	I1221 20:27:14.605863  355293 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766361" [01021053-4aea-4420-925c-e9b0557ee527] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:14.605874  355293 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766361" [0685a065-2a5a-4c04-91d4-900223e9a67a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:14.605882  355293 system_pods.go:89] "kube-proxy-w9lgb" [0917f5ab-1135-421c-b15c-096a64269fab] Running
	I1221 20:27:14.605892  355293 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766361" [756d01a7-e8d0-4714-9abb-34d8d19c8115] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:14.605900  355293 system_pods.go:89] "storage-provisioner" [852bdfc6-9902-475e-90d4-df19a02320fc] Running
	I1221 20:27:14.605908  355293 system_pods.go:126] duration metric: took 2.927241ms to wait for k8s-apps to be running ...
	I1221 20:27:14.605918  355293 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 20:27:14.605963  355293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:14.620737  355293 system_svc.go:56] duration metric: took 14.812436ms WaitForService to wait for kubelet
	I1221 20:27:14.620764  355293 kubeadm.go:587] duration metric: took 3.876051255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 20:27:14.620781  355293 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:14.623820  355293 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:14.623845  355293 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:14.623864  355293 node_conditions.go:105] duration metric: took 3.074979ms to run NodePressure ...
	I1221 20:27:14.623875  355293 start.go:242] waiting for startup goroutines ...
	I1221 20:27:14.623883  355293 start.go:247] waiting for cluster config update ...
	I1221 20:27:14.623893  355293 start.go:256] writing updated cluster config ...
	I1221 20:27:14.624149  355293 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:14.627869  355293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:14.631173  355293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	W1221 20:27:16.635807  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:14.582532  356149 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:14.582720  356149 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:14.582775  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.616339  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.616358  356149 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:14.616398  356149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:14.642742  356149 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:14.642760  356149 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:14.642767  356149 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:14.642856  356149 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:14.642923  356149 ssh_runner.go:195] Run: crio config
	I1221 20:27:14.689043  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:14.689070  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:14.689084  356149 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:14.689105  356149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:14.689219  356149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:14.689291  356149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:14.697326  356149 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:14.697381  356149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:14.705127  356149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:14.717405  356149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:14.731759  356149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
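
Note: the 2216-byte kubeadm.yaml.new written above is the three-document config rendered earlier (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). If one wanted to exercise such a file by hand without modifying the node, something like the following should work (a sketch; --dry-run makes kubeadm render its manifests without applying anything):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
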
	I1221 20:27:14.743893  356149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:14.747260  356149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:14.756571  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:14.836363  356149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:14.861551  356149 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:14.861572  356149 certs.go:195] generating shared ca certs ...
	I1221 20:27:14.861586  356149 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.861730  356149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:14.861776  356149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:14.861786  356149 certs.go:257] generating profile certs ...
	I1221 20:27:14.861838  356149 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:14.861851  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt with IP's: []
	I1221 20:27:14.969695  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt ...
	I1221 20:27:14.969723  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.crt: {Name:mk9873aa49abf1e0c21b43fa4eeaac6bd3e5af6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969891  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key ...
	I1221 20:27:14.969903  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key: {Name:mk54cfa5fdd535a853df99958b13c9506ad5bf8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:14.969977  356149 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:14.969991  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1221 20:27:15.023559  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 ...
	I1221 20:27:15.023594  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303: {Name:mkeb8aae65e03e7f80ec0f686fed9ea06cda0c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023783  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 ...
	I1221 20:27:15.023802  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303: {Name:mk3d23054258bc709f78fde53bfd58ad79495c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.023909  356149 certs.go:382] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt
	I1221 20:27:15.024018  356149 certs.go:386] copying /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303 -> /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key
	I1221 20:27:15.024108  356149 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:15.024137  356149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt with IP's: []
	I1221 20:27:15.238672  356149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt ...
	I1221 20:27:15.238700  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt: {Name:mk12ceb8fec2627da1e23919a8ad1b2d47c85a1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.238872  356149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key ...
	I1221 20:27:15.238890  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key: {Name:mk350b0a8872a865f49a834064f6447e0f7240cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:15.239094  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:15.239147  356149 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:15.239163  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:15.239199  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:15.239246  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:15.239281  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:15.239343  356149 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:15.239918  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:15.257758  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:15.274862  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:15.292146  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:15.309413  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:15.328072  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:15.349778  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:15.369272  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:15.389257  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:15.409819  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:15.429531  356149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:15.446818  356149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:15.458998  356149 ssh_runner.go:195] Run: openssl version
	I1221 20:27:15.465312  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.472913  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:15.480737  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484301  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.484353  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:15.520431  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:15.528644  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1221 20:27:15.536038  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.544064  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:15.551906  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555536  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.555579  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:15.591848  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:15.599139  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12711.pem /etc/ssl/certs/51391683.0
	I1221 20:27:15.606610  356149 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.613779  356149 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:15.620972  356149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625110  356149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.625149  356149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:15.660450  356149 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:15.667624  356149 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/127112.pem /etc/ssl/certs/3ec20f2e.0
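
Note: the b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject-hash symlinks, the same layout c_rehash or update-ca-certificates would produce; minikube derives each name from openssl x509 -hash as shown. The equivalent manual step for one certificate looks like this (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
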
	I1221 20:27:15.674835  356149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:15.678595  356149 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1221 20:27:15.678651  356149 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:15.678723  356149 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:15.678765  356149 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:15.708139  356149 cri.go:96] found id: ""
	I1221 20:27:15.708254  356149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:15.717705  356149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 20:27:15.726595  356149 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1221 20:27:15.726664  356149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 20:27:15.735640  356149 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 20:27:15.735658  356149 kubeadm.go:158] found existing configuration files:
	
	I1221 20:27:15.735693  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1221 20:27:15.743487  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1221 20:27:15.743528  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1221 20:27:15.750424  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1221 20:27:15.757426  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1221 20:27:15.757476  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1221 20:27:15.764200  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.771497  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1221 20:27:15.771543  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1221 20:27:15.778713  356149 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1221 20:27:15.786060  356149 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1221 20:27:15.786104  356149 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1221 20:27:15.793154  356149 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 20:27:15.895321  356149 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1221 20:27:15.954184  356149 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1221 20:27:18.637834  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:21.137485  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:23.057253  356149 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1221 20:27:23.057342  356149 kubeadm.go:319] [preflight] Running pre-flight checks
	I1221 20:27:23.057464  356149 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1221 20:27:23.057536  356149 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1221 20:27:23.057581  356149 kubeadm.go:319] OS: Linux
	I1221 20:27:23.057656  356149 kubeadm.go:319] CGROUPS_CPU: enabled
	I1221 20:27:23.057734  356149 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1221 20:27:23.057805  356149 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1221 20:27:23.057892  356149 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1221 20:27:23.057979  356149 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1221 20:27:23.058048  356149 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1221 20:27:23.058117  356149 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1221 20:27:23.058158  356149 kubeadm.go:319] CGROUPS_IO: enabled
	I1221 20:27:23.058281  356149 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 20:27:23.058392  356149 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 20:27:23.058543  356149 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1221 20:27:23.058644  356149 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 20:27:23.069304  356149 out.go:252]   - Generating certificates and keys ...
	I1221 20:27:23.069398  356149 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1221 20:27:23.069491  356149 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1221 20:27:23.069583  356149 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 20:27:23.069664  356149 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1221 20:27:23.069745  356149 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1221 20:27:23.069835  356149 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1221 20:27:23.069903  356149 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1221 20:27:23.070063  356149 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-734511] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1221 20:27:23.070146  356149 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1221 20:27:23.070332  356149 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-734511] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1221 20:27:23.070450  356149 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 20:27:23.070543  356149 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 20:27:23.070613  356149 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1221 20:27:23.070693  356149 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 20:27:23.070773  356149 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 20:27:23.070851  356149 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1221 20:27:23.070934  356149 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 20:27:23.071032  356149 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 20:27:23.071140  356149 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 20:27:23.071282  356149 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 20:27:23.071375  356149 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 20:27:23.075423  356149 out.go:252]   - Booting up control plane ...
	I1221 20:27:23.075551  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 20:27:23.075648  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 20:27:23.075736  356149 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 20:27:23.075906  356149 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 20:27:23.076043  356149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1221 20:27:23.076213  356149 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1221 20:27:23.076369  356149 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 20:27:23.076454  356149 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1221 20:27:23.076645  356149 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1221 20:27:23.076789  356149 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1221 20:27:23.076930  356149 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.116041ms
	I1221 20:27:23.077079  356149 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1221 20:27:23.077215  356149 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1221 20:27:23.077359  356149 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1221 20:27:23.077495  356149 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1221 20:27:23.077612  356149 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005819159s
	I1221 20:27:23.077698  356149 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.346286694s
	I1221 20:27:23.077780  356149 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002124897s
	I1221 20:27:23.077914  356149 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 20:27:23.078078  356149 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 20:27:23.078154  356149 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 20:27:23.078439  356149 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-734511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 20:27:23.078512  356149 kubeadm.go:319] [bootstrap-token] Using token: s2l34i.w3afmswk2s1ke4hl
	I1221 20:27:23.099165  356149 out.go:252]   - Configuring RBAC rules ...
	I1221 20:27:23.099408  356149 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 20:27:23.099549  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 20:27:23.099770  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 20:27:23.099948  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 20:27:23.100117  356149 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 20:27:23.100319  356149 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 20:27:23.100533  356149 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 20:27:23.100614  356149 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1221 20:27:23.100683  356149 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1221 20:27:23.100690  356149 kubeadm.go:319] 
	I1221 20:27:23.100841  356149 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1221 20:27:23.100882  356149 kubeadm.go:319] 
	I1221 20:27:23.100987  356149 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1221 20:27:23.100998  356149 kubeadm.go:319] 
	I1221 20:27:23.101028  356149 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1221 20:27:23.101109  356149 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 20:27:23.101203  356149 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 20:27:23.101244  356149 kubeadm.go:319] 
	I1221 20:27:23.101321  356149 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1221 20:27:23.101339  356149 kubeadm.go:319] 
	I1221 20:27:23.101406  356149 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 20:27:23.101412  356149 kubeadm.go:319] 
	I1221 20:27:23.101618  356149 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1221 20:27:23.101822  356149 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 20:27:23.101924  356149 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 20:27:23.101934  356149 kubeadm.go:319] 
	I1221 20:27:23.102047  356149 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 20:27:23.102190  356149 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1221 20:27:23.102250  356149 kubeadm.go:319] 
	I1221 20:27:23.102358  356149 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s2l34i.w3afmswk2s1ke4hl \
	I1221 20:27:23.102486  356149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 \
	I1221 20:27:23.102515  356149 kubeadm.go:319] 	--control-plane 
	I1221 20:27:23.102527  356149 kubeadm.go:319] 
	I1221 20:27:23.102630  356149 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1221 20:27:23.102639  356149 kubeadm.go:319] 
	I1221 20:27:23.102762  356149 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s2l34i.w3afmswk2s1ke4hl \
	I1221 20:27:23.102972  356149 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:929965d6779618deae7626b8f613e607c8cbac58d647b4036c4aa0ec90ba78e1 
	I1221 20:27:23.103002  356149 cni.go:84] Creating CNI manager for ""
	I1221 20:27:23.103014  356149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:23.178881  356149 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1221 20:27:23.215628  356149 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 20:27:23.221915  356149 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1221 20:27:23.221937  356149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1221 20:27:23.247115  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 20:27:23.751074  356149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 20:27:23.751155  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:23.751177  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-734511 minikube.k8s.io/updated_at=2025_12_21T20_27_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c minikube.k8s.io/name=newest-cni-734511 minikube.k8s.io/primary=true
	I1221 20:27:23.763199  356149 ops.go:34] apiserver oom_adj: -16
	I1221 20:27:23.858174  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1221 20:27:23.635836  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:26.135999  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:24.358431  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:24.858340  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:25.358957  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:25.859060  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:26.359028  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:26.858870  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:27.358492  356149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 20:27:27.426353  356149 kubeadm.go:1114] duration metric: took 3.675254182s to wait for elevateKubeSystemPrivileges
	I1221 20:27:27.426388  356149 kubeadm.go:403] duration metric: took 11.747742078s to StartCluster
	I1221 20:27:27.426406  356149 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:27.426483  356149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:27.428125  356149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:27.428390  356149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 20:27:27.428401  356149 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:27.428470  356149 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:27.428573  356149 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-734511"
	I1221 20:27:27.428592  356149 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:27.428608  356149 addons.go:70] Setting default-storageclass=true in profile "newest-cni-734511"
	I1221 20:27:27.428657  356149 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-734511"
	I1221 20:27:27.428597  356149 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-734511"
	I1221 20:27:27.428819  356149 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:27.429145  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:27.429214  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:27.430724  356149 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:27.431915  356149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:27.452150  356149 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:27.454516  356149 addons.go:239] Setting addon default-storageclass=true in "newest-cni-734511"
	I1221 20:27:27.454563  356149 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:27.455034  356149 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:27.456331  356149 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:27.456353  356149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:27.456411  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:27.487696  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:27.490057  356149 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:27.490079  356149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:27.490154  356149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:27.520004  356149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:27.539826  356149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 20:27:27.594610  356149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:27.622127  356149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:27.641802  356149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:27.738060  356149 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1221 20:27:27.740804  356149 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:27.740850  356149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:27.934182  356149 api_server.go:72] duration metric: took 505.744847ms to wait for apiserver process to appear ...
	I1221 20:27:27.934209  356149 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:27.934239  356149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:27.939810  356149 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:27:27.941167  356149 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:27:27.941189  356149 api_server.go:131] duration metric: took 6.973629ms to wait for apiserver health ...
	I1221 20:27:27.941205  356149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:27.942432  356149 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1221 20:27:27.943665  356149 system_pods.go:59] 5 kube-system pods found
	I1221 20:27:27.943696  356149 system_pods.go:61] "etcd-newest-cni-734511" [5f6a8b90-3b7d-433a-8e62-fc0be1f726a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:27.943703  356149 system_pods.go:61] "kube-apiserver-newest-cni-734511" [d0ac5067-f06f-4fff-853f-483d61d3a345] Running
	I1221 20:27:27.943711  356149 system_pods.go:61] "kube-controller-manager-newest-cni-734511" [fcb485ed-488d-41fb-b94c-dd1321961ccd] Running
	I1221 20:27:27.943717  356149 system_pods.go:61] "kube-scheduler-newest-cni-734511" [e0670313-ee97-46e9-9090-98628a7613e7] Running
	I1221 20:27:27.943723  356149 system_pods.go:61] "storage-provisioner" [5bfed1a9-5cd0-45a6-abf9-ae34c8f2ab35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:27.943738  356149 system_pods.go:74] duration metric: took 2.526143ms to wait for pod list to return data ...
	I1221 20:27:27.943747  356149 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:27.944658  356149 addons.go:530] duration metric: took 516.188075ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1221 20:27:27.945768  356149 default_sa.go:45] found service account: "default"
	I1221 20:27:27.945791  356149 default_sa.go:55] duration metric: took 2.037615ms for default service account to be created ...
	I1221 20:27:27.945806  356149 kubeadm.go:587] duration metric: took 517.370471ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:27.945829  356149 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:27.947810  356149 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:27.947834  356149 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:27.947853  356149 node_conditions.go:105] duration metric: took 2.01341ms to run NodePressure ...
	I1221 20:27:27.947871  356149 start.go:242] waiting for startup goroutines ...
	I1221 20:27:28.242850  356149 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-734511" context rescaled to 1 replicas
	I1221 20:27:28.242892  356149 start.go:247] waiting for cluster config update ...
	I1221 20:27:28.242906  356149 start.go:256] writing updated cluster config ...
	I1221 20:27:28.243206  356149 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:28.304493  356149 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:28.312905  356149 out.go:179] * Done! kubectl is now configured to use "newest-cni-734511" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 21 20:27:18 newest-cni-734511 crio[774]: time="2025-12-21T20:27:18.128278234Z" level=info msg="Started container" PID=1215 containerID=b61ae48ed3490c6be5a7a5cd7f006059b24756e15aeb45b8f47039065a026f81 description=kube-system/kube-controller-manager-newest-cni-734511/kube-controller-manager id=73a4e7a6-96bc-43cb-ba07-5c05c081d778 name=/runtime.v1.RuntimeService/StartContainer sandboxID=417d04c7af98e99106da937ab0472b8afe38ea30f7fa8b383346eeb21947448b
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.276423699Z" level=info msg="Running pod sandbox: kube-system/kindnet-ztvbb/POD" id=8b077929-9aa8-40d7-930e-1792c79ebb62 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.276529716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.277093483Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-9mrbd/POD" id=56d83919-5987-476f-849a-7cc391c11797 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.277163564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.281910867Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8b077929-9aa8-40d7-930e-1792c79ebb62 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.282975921Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=56d83919-5987-476f-849a-7cc391c11797 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.283633409Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.284473822Z" level=info msg="Ran pod sandbox ae7db77a33bd359ad73f489012a981ee965e86365316c43377f3d22b1a8db2be with infra container: kube-system/kindnet-ztvbb/POD" id=8b077929-9aa8-40d7-930e-1792c79ebb62 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.284732839Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.285655626Z" level=info msg="Ran pod sandbox d3f572a2b946536618b73bdd20d48fef9901565728c6625dee865eda79575cfd with infra container: kube-system/kube-proxy-9mrbd/POD" id=56d83919-5987-476f-849a-7cc391c11797 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.285747431Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=4d82a50b-84a0-4622-a3fb-b62ca929eb27 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.285886948Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=4d82a50b-84a0-4622-a3fb-b62ca929eb27 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.285946509Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=4d82a50b-84a0-4622-a3fb-b62ca929eb27 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.286668178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=72dff1df-785a-401e-a4f8-32d348132fc1 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.286972254Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d891a547-3fed-4b26-a53e-7a074d09a279 name=/runtime.v1.ImageService/PullImage
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.287560478Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=6db775fe-f3a2-4f7e-bbd0-caffe56a3e00 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.289969564Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.292218628Z" level=info msg="Creating container: kube-system/kube-proxy-9mrbd/kube-proxy" id=69074afa-5f39-4897-b5b9-207482aa8965 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.292562826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.298050461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.298783955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.349995718Z" level=info msg="Created container 7dbcbf4dca575e1ab42a6c906c251b36b0a64278cf8395a52e91ca152d2197a1: kube-system/kube-proxy-9mrbd/kube-proxy" id=69074afa-5f39-4897-b5b9-207482aa8965 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.350864623Z" level=info msg="Starting container: 7dbcbf4dca575e1ab42a6c906c251b36b0a64278cf8395a52e91ca152d2197a1" id=4867c720-1c5f-42ef-84e0-c497245fa0cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:28 newest-cni-734511 crio[774]: time="2025-12-21T20:27:28.354699455Z" level=info msg="Started container" PID=1570 containerID=7dbcbf4dca575e1ab42a6c906c251b36b0a64278cf8395a52e91ca152d2197a1 description=kube-system/kube-proxy-9mrbd/kube-proxy id=4867c720-1c5f-42ef-84e0-c497245fa0cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3f572a2b946536618b73bdd20d48fef9901565728c6625dee865eda79575cfd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7dbcbf4dca575       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   1 second ago        Running             kube-proxy                0                   d3f572a2b9465       kube-proxy-9mrbd                            kube-system
	9071f2a902df3       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   11 seconds ago      Running             kube-scheduler            0                   9bcc9fe3d0f7e       kube-scheduler-newest-cni-734511            kube-system
	42e0789a2bcf2       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   11 seconds ago      Running             etcd                      0                   54f55b8cd5ed1       etcd-newest-cni-734511                      kube-system
	b61ae48ed3490       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   11 seconds ago      Running             kube-controller-manager   0                   417d04c7af98e       kube-controller-manager-newest-cni-734511   kube-system
	0ec6584917ed2       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   11 seconds ago      Running             kube-apiserver            0                   1455e70f8fb32       kube-apiserver-newest-cni-734511            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-734511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-734511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=newest-cni-734511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_27_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:27:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-734511
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:22 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:22 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:22 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 21 Dec 2025 20:27:22 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-734511
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                ac30e952-d18a-4d33-99ce-65bf90d321e1
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-734511                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-ztvbb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-734511             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-734511    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-9mrbd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-734511             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-734511 event: Registered Node newest-cni-734511 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [42e0789a2bcf224a7593961358ecea8b5a62c15143ca9111abb3866aa4e49f37] <==
	{"level":"info","ts":"2025-12-21T20:27:18.865666Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:18.866248Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:18.866290Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:27:18.866312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:18.866323Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:18.867097Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-21T20:27:18.867642Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-734511 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:27:18.867651Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:18.867675Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:18.867883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:18.867899Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-21T20:27:18.867912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:18.867976Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-21T20:27:18.868017Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-21T20:27:18.868061Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-21T20:27:18.868248Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-21T20:27:18.868983Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:18.868993Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:18.874095Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:27:18.874105Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2025-12-21T20:27:23.728183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.661811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-21T20:27:23.728314Z","caller":"traceutil/trace.go:172","msg":"trace[1559941349] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:0; response_revision:306; }","duration":"164.830883ms","start":"2025-12-21T20:27:23.563465Z","end":"2025-12-21T20:27:23.728296Z","steps":["trace[1559941349] 'agreement among raft nodes before linearized reading'  (duration: 78.582848ms)","trace[1559941349] 'range keys from in-memory index tree'  (duration: 86.042243ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:27:23.728316Z","caller":"traceutil/trace.go:172","msg":"trace[246494396] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"209.428879ms","start":"2025-12-21T20:27:23.518863Z","end":"2025-12-21T20:27:23.728292Z","steps":["trace[246494396] 'process raft request'  (duration: 123.217616ms)","trace[246494396] 'compare'  (duration: 86.031704ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-21T20:27:23.728564Z","caller":"traceutil/trace.go:172","msg":"trace[1940108966] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"178.832386ms","start":"2025-12-21T20:27:23.549719Z","end":"2025-12-21T20:27:23.728551Z","steps":["trace[1940108966] 'process raft request'  (duration: 178.541921ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-21T20:27:23.728656Z","caller":"traceutil/trace.go:172","msg":"trace[201838937] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"177.812247ms","start":"2025-12-21T20:27:23.550828Z","end":"2025-12-21T20:27:23.728640Z","steps":["trace[201838937] 'process raft request'  (duration: 177.675168ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:27:29 up  1:09,  0 user,  load average: 4.71, 4.00, 2.84
	Linux newest-cni-734511 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [0ec6584917ed239b8b1cc5e9d8deb62a2d11f1cc165ea5295f39e7b3877206bb] <==
	I1221 20:27:20.043690       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1221 20:27:20.075497       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1221 20:27:20.122754       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:27:20.124978       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:20.124994       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1221 20:27:20.128909       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:20.128956       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1221 20:27:20.216633       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:27:20.925780       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1221 20:27:20.929685       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1221 20:27:20.929699       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:27:21.478842       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:27:21.536009       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:27:21.629951       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 20:27:21.639568       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1221 20:27:21.641725       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:21.646242       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:27:21.955268       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:27:22.447370       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:27:22.460186       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 20:27:22.468089       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1221 20:27:27.757994       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:27:27.907717       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:27.911531       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:27.955014       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b61ae48ed3490c6be5a7a5cd7f006059b24756e15aeb45b8f47039065a026f81] <==
	I1221 20:27:26.759559       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760161       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-734511"
	I1221 20:27:26.759798       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760257       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1221 20:27:26.760614       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760624       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760745       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760753       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760761       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760771       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760781       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760789       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760796       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.760954       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.761127       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.761586       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.764426       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.764534       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.765186       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.766715       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-734511" podCIDRs=["10.42.0.0/24"]
	I1221 20:27:26.768439       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:26.861653       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:26.861682       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:27:26.861689       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:27:26.869050       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7dbcbf4dca575e1ab42a6c906c251b36b0a64278cf8395a52e91ca152d2197a1] <==
	I1221 20:27:28.417835       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:27:28.473464       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:28.574315       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:28.574361       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1221 20:27:28.574515       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:27:28.595909       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:27:28.595986       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:27:28.601999       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:27:28.602407       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:27:28.602426       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:28.603644       1 config.go:200] "Starting service config controller"
	I1221 20:27:28.603665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:27:28.603769       1 config.go:309] "Starting node config controller"
	I1221 20:27:28.603849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:27:28.603879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:27:28.603932       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:27:28.603994       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:27:28.603937       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:27:28.604016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:27:28.704514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:27:28.704587       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:27:28.704646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9071f2a902df33d68f87f9b8cbcf247515527cbcb434f54c99474ad93dda8205] <==
	E1221 20:27:19.976784       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1221 20:27:19.976793       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1221 20:27:19.977065       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1221 20:27:19.977066       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1221 20:27:19.977122       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1221 20:27:19.977218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1221 20:27:19.977326       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1221 20:27:19.977329       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1221 20:27:20.808714       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1221 20:27:20.824219       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1221 20:27:20.868691       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1221 20:27:20.882111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1221 20:27:20.900332       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1221 20:27:20.923389       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1221 20:27:20.939459       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1221 20:27:20.941199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1221 20:27:20.946099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1221 20:27:20.950107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1221 20:27:21.058184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1221 20:27:21.075531       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1221 20:27:21.081789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1221 20:27:21.093503       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1221 20:27:21.131882       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1221 20:27:21.284981       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	I1221 20:27:23.470944       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: E1221 20:27:23.443954    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-734511" containerName="kube-apiserver"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: E1221 20:27:23.443982    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-734511" containerName="kube-controller-manager"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: E1221 20:27:23.444091    1296 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-734511\" already exists" pod="kube-system/kube-scheduler-newest-cni-734511"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: E1221 20:27:23.444118    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: I1221 20:27:23.510367    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-734511" podStartSLOduration=1.51034466 podStartE2EDuration="1.51034466s" podCreationTimestamp="2025-12-21 20:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:27:23.480037302 +0000 UTC m=+1.233578006" watchObservedRunningTime="2025-12-21 20:27:23.51034466 +0000 UTC m=+1.263885363"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: I1221 20:27:23.730342    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-734511" podStartSLOduration=1.730320393 podStartE2EDuration="1.730320393s" podCreationTimestamp="2025-12-21 20:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:27:23.510587538 +0000 UTC m=+1.264128241" watchObservedRunningTime="2025-12-21 20:27:23.730320393 +0000 UTC m=+1.483861096"
	Dec 21 20:27:23 newest-cni-734511 kubelet[1296]: I1221 20:27:23.742427    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-734511" podStartSLOduration=1.742410212 podStartE2EDuration="1.742410212s" podCreationTimestamp="2025-12-21 20:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:27:23.730164513 +0000 UTC m=+1.483705217" watchObservedRunningTime="2025-12-21 20:27:23.742410212 +0000 UTC m=+1.495950917"
	Dec 21 20:27:24 newest-cni-734511 kubelet[1296]: E1221 20:27:24.392937    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-734511" containerName="kube-controller-manager"
	Dec 21 20:27:24 newest-cni-734511 kubelet[1296]: E1221 20:27:24.393078    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:24 newest-cni-734511 kubelet[1296]: E1221 20:27:24.393134    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-734511" containerName="etcd"
	Dec 21 20:27:24 newest-cni-734511 kubelet[1296]: E1221 20:27:24.393370    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-734511" containerName="kube-apiserver"
	Dec 21 20:27:25 newest-cni-734511 kubelet[1296]: E1221 20:27:25.394931    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:26 newest-cni-734511 kubelet[1296]: E1221 20:27:26.396294    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:26 newest-cni-734511 kubelet[1296]: I1221 20:27:26.803854    1296 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 21 20:27:26 newest-cni-734511 kubelet[1296]: I1221 20:27:26.804834    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.080707    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-cni-cfg\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.080774    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-xtables-lock\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.080821    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-lib-modules\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.080846    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbld\" (UniqueName: \"kubernetes.io/projected/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-kube-api-access-5rbld\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.080874    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-xtables-lock\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.081013    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/462d4133-ac15-436a-91fe-13e1ec9c1430-kube-proxy\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.081051    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-lib-modules\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: I1221 20:27:28.081078    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gvp9\" (UniqueName: \"kubernetes.io/projected/462d4133-ac15-436a-91fe-13e1ec9c1430-kube-api-access-7gvp9\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:28 newest-cni-734511 kubelet[1296]: E1221 20:27:28.761931    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-734511" containerName="kube-apiserver"
	Dec 21 20:27:29 newest-cni-734511 kubelet[1296]: I1221 20:27:29.427977    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-9mrbd" podStartSLOduration=2.427953702 podStartE2EDuration="2.427953702s" podCreationTimestamp="2025-12-21 20:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-21 20:27:29.427385013 +0000 UTC m=+7.180925716" watchObservedRunningTime="2025-12-21 20:27:29.427953702 +0000 UTC m=+7.181494408"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-734511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jlczz storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner: exit status 1 (75.574438ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jlczz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-734511 --alsologtostderr -v=1
E1221 20:27:52.436580   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-734511 --alsologtostderr -v=1: exit status 80 (2.262830114s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-734511 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:27:50.489081  369225 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:50.489368  369225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:50.489378  369225 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:50.489383  369225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:50.489623  369225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:50.489953  369225 out.go:368] Setting JSON to false
	I1221 20:27:50.489979  369225 mustload.go:66] Loading cluster: newest-cni-734511
	I1221 20:27:50.490538  369225 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:50.491055  369225 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:50.509301  369225 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:50.509623  369225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:50.564761  369225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:27:50.555550178 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:50.565465  369225 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-734511 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotific
ation:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:27:50.567259  369225 out.go:179] * Pausing node newest-cni-734511 ... 
	I1221 20:27:50.568425  369225 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:50.568699  369225 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:50.568752  369225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:50.587836  369225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:50.685378  369225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:50.696937  369225 pause.go:52] kubelet running: true
	I1221 20:27:50.697015  369225 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:50.837808  369225 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:50.837892  369225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:50.900372  369225 cri.go:96] found id: "b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5"
	I1221 20:27:50.900393  369225 cri.go:96] found id: "7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97"
	I1221 20:27:50.900397  369225 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:50.900400  369225 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:50.900403  369225 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:50.900406  369225 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:50.900409  369225 cri.go:96] found id: ""
	I1221 20:27:50.900452  369225 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:50.911367  369225 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:50Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:51.038705  369225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:51.050946  369225 pause.go:52] kubelet running: false
	I1221 20:27:51.050997  369225 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:51.164436  369225 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:51.164534  369225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:51.228650  369225 cri.go:96] found id: "b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5"
	I1221 20:27:51.228680  369225 cri.go:96] found id: "7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97"
	I1221 20:27:51.228685  369225 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:51.228690  369225 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:51.228695  369225 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:51.228698  369225 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:51.228701  369225 cri.go:96] found id: ""
	I1221 20:27:51.228738  369225 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:51.473634  369225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:51.486957  369225 pause.go:52] kubelet running: false
	I1221 20:27:51.487020  369225 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:51.606621  369225 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:51.606699  369225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:51.670519  369225 cri.go:96] found id: "b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5"
	I1221 20:27:51.670546  369225 cri.go:96] found id: "7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97"
	I1221 20:27:51.670552  369225 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:51.670556  369225 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:51.670560  369225 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:51.670564  369225 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:51.670569  369225 cri.go:96] found id: ""
	I1221 20:27:51.670615  369225 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:52.484916  369225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:52.497762  369225 pause.go:52] kubelet running: false
	I1221 20:27:52.497840  369225 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:52.607722  369225 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:52.607807  369225 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:52.672349  369225 cri.go:96] found id: "b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5"
	I1221 20:27:52.672374  369225 cri.go:96] found id: "7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97"
	I1221 20:27:52.672381  369225 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:52.672386  369225 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:52.672391  369225 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:52.672396  369225 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:52.672400  369225 cri.go:96] found id: ""
	I1221 20:27:52.672444  369225 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:52.685317  369225 out.go:203] 
	W1221 20:27:52.686466  369225 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:27:52.686484  369225 out.go:285] * 
	* 
	W1221 20:27:52.690493  369225 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:27:52.691746  369225 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-734511 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-734511
helpers_test.go:244: (dbg) docker inspect newest-cni-734511:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	        "Created": "2025-12-21T20:27:08.312566365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:27:39.952942318Z",
	            "FinishedAt": "2025-12-21T20:27:38.922949694Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hostname",
	        "HostsPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hosts",
	        "LogPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca-json.log",
	        "Name": "/newest-cni-734511",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-734511:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-734511",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	                "LowerDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-734511",
	                "Source": "/var/lib/docker/volumes/newest-cni-734511/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-734511",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-734511",
	                "name.minikube.sigs.k8s.io": "newest-cni-734511",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d2fd62fdf1e67c72486733620259ef3e8e6a6ada105e62cfc532374fbb351cee",
	            "SandboxKey": "/var/run/docker/netns/d2fd62fdf1e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-734511": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14816134e98be2c6f9635a0cd5947ae7aa1c8333188fd4c39e01a9672f929d75",
	                    "EndpointID": "e6bc62d1bf054993a4a162d4506d5cd466701e4f5a86ec56ac04e72df7b571c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:5e:16:47:51:ae",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-734511",
	                        "f11eda59f7a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511: exit status 2 (308.892601ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-734511 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬────
─────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼────
─────────────────┤
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ stop    │ -p newest-cni-734511 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p test-preload-dl-gcs-162834                                                                                                                                                                                                                      │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                       │ test-preload-dl-github-984988     │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-734511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832404 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                      │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832404                                                                                                                                                                                                               │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ newest-cni-734511 image list --format=json                                                                                                                                                                                                         │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p newest-cni-734511 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴────
─────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:39.861418  366911 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:39.861689  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861699  366911 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:39.861716  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861952  366911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:39.862461  366911 out.go:368] Setting JSON to false
	I1221 20:27:39.863571  366911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4209,"bootTime":1766344651,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:39.863625  366911 start.go:143] virtualization: kvm guest
	I1221 20:27:39.865281  366911 out.go:179] * [test-preload-dl-gcs-cached-832404] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:39.866365  366911 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:39.866394  366911 notify.go:221] Checking for updates...
	I1221 20:27:39.868343  366911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:39.869547  366911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:39.870766  366911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:39.871777  366911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:39.872792  366911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:39.813750  366768 start.go:309] selected driver: docker
	I1221 20:27:39.813763  366768 start.go:928] validating driver "docker" against &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.813865  366768 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:39.814431  366768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.876119  366768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-21 20:27:39.8661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.876540  366768 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:39.876591  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:39.876661  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:39.876724  366768 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.878915  366768 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:39.879838  366768 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:39.880931  366768 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:39.881866  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:39.881912  366768 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:39.881925  366768 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:39.881974  366768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:39.882031  366768 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:39.882046  366768 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:39.882176  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:39.903361  366768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:39.903382  366768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:39.903398  366768 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:39.903430  366768 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:39.903492  366768 start.go:364] duration metric: took 34.632µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:39.903512  366768 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:27:39.903523  366768 fix.go:54] fixHost starting: 
	I1221 20:27:39.903753  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:39.923053  366768 fix.go:112] recreateIfNeeded on newest-cni-734511: state=Stopped err=<nil>
	W1221 20:27:39.923121  366768 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:27:39.874491  366911 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:39.874647  366911 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:39.874760  366911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:39.901645  366911 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:39.901739  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.958327  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-21 20:27:39.948377601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.958440  366911 docker.go:319] overlay module found
	I1221 20:27:39.959925  366911 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:39.961104  366911 start.go:309] selected driver: docker
	I1221 20:27:39.961123  366911 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:39.961304  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:40.019442  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:27:40.008652501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:40.019675  366911 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:27:40.020403  366911 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 20:27:40.020608  366911 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 20:27:40.023852  366911 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:40.025067  366911 cni.go:84] Creating CNI manager for ""
	I1221 20:27:40.025144  366911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:40.025159  366911 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:40.025380  366911 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-832404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-832404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:40.026811  366911 out.go:179] * Starting "test-preload-dl-gcs-cached-832404" primary control-plane node in "test-preload-dl-gcs-cached-832404" cluster
	I1221 20:27:40.028161  366911 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:40.030043  366911 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:40.031142  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031190  366911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:40.031200  366911 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:40.031280  366911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:40.031312  366911 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:40.031323  366911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I1221 20:27:40.031455  366911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json ...
	I1221 20:27:40.031477  366911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json: {Name:mkf6696e0851cdf6856c1ee2548d89a9b19f171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:40.031631  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031707  366911 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I1221 20:27:40.056706  366911 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:40.056732  366911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 20:27:40.056815  366911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 20:27:40.056829  366911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 20:27:40.056833  366911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 20:27:40.056842  366911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 20:27:40.056853  366911 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:40.058381  366911 out.go:179] * Download complete!
	W1221 20:27:39.136122  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:41.635776  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:39.924855  366768 out.go:252] * Restarting existing docker container for "newest-cni-734511" ...
	I1221 20:27:39.924929  366768 cli_runner.go:164] Run: docker start newest-cni-734511
	I1221 20:27:40.181723  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:40.200215  366768 kic.go:430] container "newest-cni-734511" state is running.
	I1221 20:27:40.200630  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:40.221078  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:40.221314  366768 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:40.221390  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:40.240477  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:40.240777  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:40.240791  366768 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:40.241508  366768 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52326->127.0.0.1:33139: read: connection reset by peer
	I1221 20:27:43.377002  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.377031  366768 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:43.377090  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.394956  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.395200  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.395215  366768 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:43.540257  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.540338  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.558595  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.558789  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.558805  366768 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:43.693472  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:43.693519  366768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:43.693547  366768 ubuntu.go:190] setting up certificates
	I1221 20:27:43.693561  366768 provision.go:84] configureAuth start
	I1221 20:27:43.693606  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:43.711122  366768 provision.go:143] copyHostCerts
	I1221 20:27:43.711190  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:43.711206  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:43.711307  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:43.711418  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:43.711428  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:43.711455  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:43.711526  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:43.711534  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:43.711556  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:43.711608  366768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:43.863689  366768 provision.go:177] copyRemoteCerts
	I1221 20:27:43.863758  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:43.863795  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.880942  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:43.976993  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:43.994083  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:44.010099  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:44.026129  366768 provision.go:87] duration metric: took 332.557611ms to configureAuth
	I1221 20:27:44.026157  366768 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:44.026344  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:44.026447  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.044140  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:44.044410  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:44.044442  366768 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:44.337510  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:44.337537  366768 machine.go:97] duration metric: took 4.116205242s to provisionDockerMachine
	I1221 20:27:44.337550  366768 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:44.337565  366768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:44.337645  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:44.337696  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.356430  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.456570  366768 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:44.460019  366768 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:44.460045  366768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:44.460055  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:44.460115  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:44.460217  366768 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:44.460366  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:44.467484  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:44.484578  366768 start.go:296] duration metric: took 147.011218ms for postStartSetup
	I1221 20:27:44.484652  366768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:44.484701  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.502940  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.597000  366768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:44.601372  366768 fix.go:56] duration metric: took 4.697843581s for fixHost
	I1221 20:27:44.601398  366768 start.go:83] releasing machines lock for "newest-cni-734511", held for 4.697894238s
	I1221 20:27:44.601460  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:44.619235  366768 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:44.619305  366768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:44.619325  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.619372  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.640849  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.641206  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.788588  366768 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:44.794953  366768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:44.828982  366768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:44.833576  366768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:44.833632  366768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:44.841303  366768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:44.841323  366768 start.go:496] detecting cgroup driver to use...
	I1221 20:27:44.841355  366768 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:44.841399  366768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:44.854483  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:44.866035  366768 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:44.866075  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:44.879803  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:44.891096  366768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:44.962811  366768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:45.036580  366768 docker.go:234] disabling docker service ...
	I1221 20:27:45.036655  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:45.049959  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:45.061658  366768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:45.143449  366768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:45.222903  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:45.237087  366768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:45.250978  366768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:45.251037  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.259700  366768 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:45.259758  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.268003  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.276177  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.284319  366768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:45.291742  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.299910  366768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.307415  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.315340  366768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:45.322121  366768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:45.328957  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.401093  366768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:27:45.538335  366768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:45.538418  366768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:45.542214  366768 start.go:564] Will wait 60s for crictl version
	I1221 20:27:45.542281  366768 ssh_runner.go:195] Run: which crictl
	I1221 20:27:45.545577  366768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:45.568875  366768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:45.568942  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.595166  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.623728  366768 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:45.624987  366768 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:45.644329  366768 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:45.649761  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.662664  366768 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:45.663704  366768 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:45.663826  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:45.663883  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.694292  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.694315  366768 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:45.694369  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.718991  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.719012  366768 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:45.719021  366768 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:45.719114  366768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:45.719176  366768 ssh_runner.go:195] Run: crio config
	I1221 20:27:45.762367  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:45.762384  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:45.762397  366768 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:45.762418  366768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:45.762543  366768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:45.762599  366768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:45.770445  366768 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:45.770499  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:45.778476  366768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:45.790329  366768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:45.801764  366768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 20:27:45.813017  366768 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:45.816383  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.825744  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.897847  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:45.922243  366768 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:45.922261  366768 certs.go:195] generating shared ca certs ...
	I1221 20:27:45.922276  366768 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:45.922431  366768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:45.922536  366768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:45.922554  366768 certs.go:257] generating profile certs ...
	I1221 20:27:45.922657  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:45.922734  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:45.922785  366768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:45.922933  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:45.922989  366768 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:45.923004  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:45.923043  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:45.923080  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:45.923115  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:45.923174  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:45.923964  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:45.941766  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:45.959821  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:45.977641  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:45.999591  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:46.017180  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:46.033291  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:46.049616  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:46.065936  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:46.082176  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:46.100908  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:46.118404  366768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:46.130148  366768 ssh_runner.go:195] Run: openssl version
	I1221 20:27:46.135988  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.143205  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:46.150252  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153722  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153769  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.187692  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:46.195104  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.201979  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:46.209200  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212567  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212618  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.247457  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:46.254920  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.261949  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:46.268910  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272330  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272382  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.306863  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:46.313724  366768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:46.317164  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:46.350547  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:46.384130  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:46.422703  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:46.467027  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:46.517807  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 20:27:46.567421  366768 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:46.567522  366768 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:46.567577  366768 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:46.602496  366768 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:46.602528  366768 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:46.602535  366768 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:46.602540  366768 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:46.602544  366768 cri.go:96] found id: ""
	I1221 20:27:46.602592  366768 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:46.614070  366768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:46Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:46.614136  366768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:46.621873  366768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:46.621908  366768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:46.621949  366768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:46.629767  366768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:46.630431  366768 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-734511" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.630721  366768 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-734511" cluster setting kubeconfig missing "newest-cni-734511" context setting]
	I1221 20:27:46.631272  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.632955  366768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:46.640752  366768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1221 20:27:46.640781  366768 kubeadm.go:602] duration metric: took 18.866801ms to restartPrimaryControlPlane
	I1221 20:27:46.640790  366768 kubeadm.go:403] duration metric: took 73.379872ms to StartCluster
	I1221 20:27:46.640811  366768 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.640881  366768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.641874  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.642101  366768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:46.642274  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:46.642329  366768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:46.642383  366768 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-734511"
	I1221 20:27:46.642399  366768 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-734511"
	W1221 20:27:46.642406  366768 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:46.642425  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642739  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.642780  366768 addons.go:70] Setting dashboard=true in profile "newest-cni-734511"
	I1221 20:27:46.642802  366768 addons.go:239] Setting addon dashboard=true in "newest-cni-734511"
	W1221 20:27:46.642810  366768 addons.go:248] addon dashboard should already be in state true
	I1221 20:27:46.642825  366768 addons.go:70] Setting default-storageclass=true in profile "newest-cni-734511"
	I1221 20:27:46.642836  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642853  366768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-734511"
	I1221 20:27:46.643163  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.643341  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.644615  366768 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:46.646107  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:46.668529  366768 addons.go:239] Setting addon default-storageclass=true in "newest-cni-734511"
	W1221 20:27:46.668549  366768 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:46.668571  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.668906  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.669412  366768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:46.669424  366768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:46.670744  366768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1221 20:27:43.636464  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:45.637045  355293 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:27:45.637079  355293 pod_ready.go:86] duration metric: took 31.005880117s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.639371  355293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.643368  355293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.643393  355293 pod_ready.go:86] duration metric: took 3.995822ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.645204  355293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.649549  355293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.649576  355293 pod_ready.go:86] duration metric: took 4.334095ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.651465  355293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.835343  355293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.835366  355293 pod_ready.go:86] duration metric: took 183.883765ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.035541  355293 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.435715  355293 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:27:46.435746  355293 pod_ready.go:86] duration metric: took 400.180233ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.634643  355293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034660  355293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:47.034685  355293 pod_ready.go:86] duration metric: took 400.019644ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034697  355293 pod_ready.go:40] duration metric: took 32.40680352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:47.076294  355293 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:47.077955  355293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
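	
	The readiness waits above poll each control-plane pod by its label until it reports Ready. A roughly equivalent manual wait (illustrative only; assumes the kubeconfig context name matches the profile name, as configured in the line above) would be:
	
		kubectl --context default-k8s-diff-port-766361 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
	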
	I1221 20:27:46.670728  366768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.670797  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:46.670848  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.671763  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:46.671780  366768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:46.671829  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.700977  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.702794  366768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.702814  366768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:46.702867  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.708071  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.726576  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.783599  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:46.796337  366768 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:46.796401  366768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:46.809276  366768 api_server.go:72] duration metric: took 167.144497ms to wait for apiserver process to appear ...
	I1221 20:27:46.809302  366768 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:46.809324  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:46.817287  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.821194  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:46.821236  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:46.837316  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:46.837342  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:46.838461  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.852066  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:46.852094  366768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:46.867040  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:46.867061  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:46.880590  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:46.880613  366768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:46.893474  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:46.893500  366768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:46.905440  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:46.905462  366768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:46.917382  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:46.917402  366768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:46.929133  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:46.929151  366768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:46.941146  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:48.329199  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1221 20:27:48.329247  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1221 20:27:48.329271  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.340161  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.340244  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.809402  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.813323  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.813346  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.847081  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.029754993s)
	I1221 20:27:48.847159  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00866423s)
	I1221 20:27:48.847289  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.906109829s)
	I1221 20:27:48.850396  366768 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-734511 addons enable metrics-server
	
	I1221 20:27:48.857477  366768 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:48.858708  366768 addons.go:530] duration metric: took 2.216387065s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
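	
	With the three addons applied, the result can be verified against the profile by hand. Illustrative commands (the kubernetes-dashboard namespace is the one the dashboard manifests create, as seen in the apiserver log further down):
	
		minikube -p newest-cni-734511 addons list
		kubectl --context newest-cni-734511 -n kubernetes-dashboard get pods
	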
	I1221 20:27:49.309469  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.314167  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:49.314201  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:49.809466  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.813534  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:27:49.814524  366768 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:27:49.814550  366768 api_server.go:131] duration metric: took 3.005240792s to wait for apiserver health ...
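	
	The polling above simply retries GET /healthz until it returns 200; the [+]/[-] lines are the apiserver's verbose per-check listing, returned while some post-start hooks are still pending. A similar probe can be issued by hand (illustrative; assumes the kubeconfig context created for this profile):
	
		kubectl --context newest-cni-734511 get --raw='/healthz?verbose'
	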
	I1221 20:27:49.814561  366768 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:49.818217  366768 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:49.818279  366768 system_pods.go:61] "coredns-7d764666f9-jlczz" [8571aecb-77d8-4d07-90b2-fd10aca80bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818296  366768 system_pods.go:61] "etcd-newest-cni-734511" [5f6a8b90-3b7d-433a-8e62-fc0be1f726a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:49.818307  366768 system_pods.go:61] "kindnet-ztvbb" [0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:27:49.818319  366768 system_pods.go:61] "kube-apiserver-newest-cni-734511" [d0ac5067-f06f-4fff-853f-483d61d3a345] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:49.818330  366768 system_pods.go:61] "kube-controller-manager-newest-cni-734511" [fcb485ed-488d-41fb-b94c-dd1321961ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:49.818340  366768 system_pods.go:61] "kube-proxy-9mrbd" [462d4133-ac15-436a-91fe-13e1ec9c1430] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:27:49.818346  366768 system_pods.go:61] "kube-scheduler-newest-cni-734511" [e0670313-ee97-46e9-9090-98628a7613e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:49.818353  366768 system_pods.go:61] "storage-provisioner" [5bfed1a9-5cd0-45a6-abf9-ae34c8f2ab35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818359  366768 system_pods.go:74] duration metric: took 3.791516ms to wait for pod list to return data ...
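	
	The two Pending pods above are consistent with the node.kubernetes.io/not-ready:NoSchedule taint that is still on the node at this point (visible in the node description further down). An illustrative way to confirm the taint by hand, assuming the context name matches the profile:
	
		kubectl --context newest-cni-734511 get node newest-cni-734511 -o jsonpath='{.spec.taints}'
	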
	I1221 20:27:49.818368  366768 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:49.820504  366768 default_sa.go:45] found service account: "default"
	I1221 20:27:49.820526  366768 default_sa.go:55] duration metric: took 2.152518ms for default service account to be created ...
	I1221 20:27:49.820542  366768 kubeadm.go:587] duration metric: took 3.178410939s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:49.820567  366768 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:49.822831  366768 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:49.822855  366768 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:49.822871  366768 node_conditions.go:105] duration metric: took 2.298304ms to run NodePressure ...
	I1221 20:27:49.822886  366768 start.go:242] waiting for startup goroutines ...
	I1221 20:27:49.822900  366768 start.go:247] waiting for cluster config update ...
	I1221 20:27:49.822919  366768 start.go:256] writing updated cluster config ...
	I1221 20:27:49.823160  366768 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:49.870266  366768 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:49.872014  366768 out.go:179] * Done! kubectl is now configured to use "newest-cni-734511" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.29327799Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-9mrbd/POD" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.293344217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.294172486Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.295121027Z" level=info msg="Ran pod sandbox e34e7f1488246f31d65d40ed56a19488b09667c154265d401e51a4a1c4022717 with infra container: kube-system/kindnet-ztvbb/POD" id=8cab90ef-b03f-42f0-9e4a-56badcd300d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.296180568Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=bf95da90-7bd1-4c74-8207-1648d53fe56c name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.296253007Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.29708702Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ea3a8da1-390a-49eb-b8f0-04493fd7d9a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.297865103Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298154635Z" level=info msg="Creating container: kube-system/kindnet-ztvbb/kindnet-cni" id=4484e614-7ad7-4e99-af06-cbcba3d7b876 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298274315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298597125Z" level=info msg="Ran pod sandbox 5ca8c9370d0bb3523d144b2cce06dd6d80fdb5301beb4a87fbf8469878c96f2a with infra container: kube-system/kube-proxy-9mrbd/POD" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.299546066Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ced98bf2-85be-42b8-9bb4-71f3366e4bae name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.301541106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=deb4ef2f-0aa2-4954-a359-6f9b591f51d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302014093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302636742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302901922Z" level=info msg="Creating container: kube-system/kube-proxy-9mrbd/kube-proxy" id=ab764b96-908d-460d-87ea-29cfa861d409 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302998462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.307660206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.308094228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.329194632Z" level=info msg="Created container 7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97: kube-system/kindnet-ztvbb/kindnet-cni" id=4484e614-7ad7-4e99-af06-cbcba3d7b876 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.329769456Z" level=info msg="Starting container: 7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97" id=1a65bc5a-d5bd-42a2-bc52-5ddc20fceb79 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.331703384Z" level=info msg="Started container" PID=1058 containerID=7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97 description=kube-system/kindnet-ztvbb/kindnet-cni id=1a65bc5a-d5bd-42a2-bc52-5ddc20fceb79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e34e7f1488246f31d65d40ed56a19488b09667c154265d401e51a4a1c4022717
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.333907456Z" level=info msg="Created container b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5: kube-system/kube-proxy-9mrbd/kube-proxy" id=ab764b96-908d-460d-87ea-29cfa861d409 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.33446356Z" level=info msg="Starting container: b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5" id=388fc65e-90e2-480b-b408-3e2c780d83ac name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.337697396Z" level=info msg="Started container" PID=1059 containerID=b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5 description=kube-system/kube-proxy-9mrbd/kube-proxy id=388fc65e-90e2-480b-b408-3e2c780d83ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ca8c9370d0bb3523d144b2cce06dd6d80fdb5301beb4a87fbf8469878c96f2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b8cd81c4ecb02       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   4 seconds ago       Running             kube-proxy                1                   5ca8c9370d0bb       kube-proxy-9mrbd                            kube-system
	7b457b8d0d855       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   e34e7f1488246       kindnet-ztvbb                               kube-system
	63cadcc519eb2       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   7 seconds ago       Running             kube-controller-manager   1                   cb5a811c43c45       kube-controller-manager-newest-cni-734511   kube-system
	e33943bb495ce       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   7 seconds ago       Running             kube-scheduler            1                   f439211ad7bfe       kube-scheduler-newest-cni-734511            kube-system
	677bf72e8ae93       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   7 seconds ago       Running             kube-apiserver            1                   f607eb3ded5d8       kube-apiserver-newest-cni-734511            kube-system
	a5c272c972236       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   ca055491158a5       etcd-newest-cni-734511                      kube-system
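	
	The table above reflects what crictl reports on the node; the same view can be reproduced by hand (illustrative):
	
		minikube -p newest-cni-734511 ssh -- sudo crictl ps -a
	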
	
	
	==> describe nodes <==
	Name:               newest-cni-734511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-734511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=newest-cni-734511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_27_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:27:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-734511
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-734511
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                ac30e952-d18a-4d33-99ce-65bf90d321e1
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-734511                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-ztvbb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-734511             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-734511    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-9mrbd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-734511             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node newest-cni-734511 event: Registered Node newest-cni-734511 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-734511 event: Registered Node newest-cni-734511 in Controller
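	
	The Ready=False condition above is expected this soon after the restart: the kindnet container had only been running for a few seconds (see the container status table), so no CNI configuration had been written to /etc/cni/net.d/ yet. Illustrative follow-up checks, assuming the same profile and context names:
	
		kubectl --context newest-cni-734511 get nodes -o wide
		minikube -p newest-cni-734511 ssh -- ls /etc/cni/net.d/
	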
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3] <==
	{"level":"info","ts":"2025-12-21T20:27:46.575157Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:27:46.575257Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575364Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575398Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575426Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575513Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:27:46.575567Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:27:47.465691Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465742Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465806Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465824Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:27:47.465841Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466476Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466494Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:27:47.466508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466516Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.467112Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-734511 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:27:47.467120Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:47.467139Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:47.467410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:47.467489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:47.468446Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:47.468476Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:47.471854Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:27:47.471889Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:27:53 up  1:10,  0 user,  load average: 3.42, 3.76, 2.79
	Linux newest-cni-734511 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97] <==
	I1221 20:27:49.581517       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:27:49.581801       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1221 20:27:49.581941       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:27:49.581969       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:27:49.581998       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:27:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:27:49.691704       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:27:49.781034       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:27:49.781163       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:27:49.781509       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:27:50.082088       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:27:50.082120       1 metrics.go:72] Registering metrics
	I1221 20:27:50.082202       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412] <==
	I1221 20:27:48.403380       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:27:48.403537       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:27:48.403716       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:27:48.403791       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.403830       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.405832       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:48.405923       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 20:27:48.418578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:27:48.424121       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:27:48.424173       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1221 20:27:48.431016       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.431032       1 policy_source.go:248] refreshing policies
	I1221 20:27:48.438560       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:27:48.669506       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:27:48.694652       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:27:48.710770       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:27:48.716736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:27:48.723578       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:27:48.750624       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.20.45"}
	I1221 20:27:48.759852       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.3.134"}
	I1221 20:27:49.306055       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:27:51.886893       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:51.886947       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:52.036679       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:27:52.136454       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb] <==
	I1221 20:27:51.537961       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538437       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538459       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1221 20:27:51.538542       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538573       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538731       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538781       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538936       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538966       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538989       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539132       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539151       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539169       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539180       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539250       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539907       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539936       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.545610       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.552967       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:51.557499       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-734511"
	I1221 20:27:51.557564       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1221 20:27:51.640183       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.640205       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:27:51.640210       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:27:51.653912       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5] <==
	I1221 20:27:49.377138       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:27:49.462191       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:49.562711       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:49.562749       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1221 20:27:49.562864       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:27:49.580152       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:27:49.580216       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:27:49.585280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:27:49.586078       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:27:49.586128       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:49.588255       1 config.go:200] "Starting service config controller"
	I1221 20:27:49.588277       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:27:49.588308       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:27:49.588325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:27:49.588364       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:27:49.588375       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:27:49.588398       1 config.go:309] "Starting node config controller"
	I1221 20:27:49.588410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:27:49.688475       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:27:49.688490       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:27:49.688526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:27:49.688620       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28] <==
	I1221 20:27:46.925658       1 serving.go:386] Generated self-signed cert in-memory
	I1221 20:27:48.358967       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:27:48.358999       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:48.366959       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 20:27:48.367074       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367012       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:48.367111       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367044       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:27:48.367203       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367243       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:27:48.367279       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:27:48.467401       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.467462       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.467496       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.496358     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.501691     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-734511\" already exists" pod="kube-system/kube-scheduler-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.501726     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.506505     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-734511\" already exists" pod="kube-system/etcd-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.506539     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.511210     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-734511\" already exists" pod="kube-system/kube-apiserver-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524158     680 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524285     680 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524326     680 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.525193     680 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.982039     680 apiserver.go:52] "Watching apiserver"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.986622     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-734511" containerName="kube-controller-manager"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.018953     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-734511" containerName="kube-apiserver"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.019055     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.019082     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-734511" containerName="etcd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.086735     680 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.137984     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-xtables-lock\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138038     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-lib-modules\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138075     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-xtables-lock\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138216     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-lib-modules\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138282     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-cni-cfg\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:50 newest-cni-734511 kubelet[680]: E1221 20:27:50.024804     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734511 -n newest-cni-734511: exit status 2 (313.221768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-734511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9: exit status 1 (59.674104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jlczz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-bjdvk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-2lpm9" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-734511
helpers_test.go:244: (dbg) docker inspect newest-cni-734511:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	        "Created": "2025-12-21T20:27:08.312566365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:27:39.952942318Z",
	            "FinishedAt": "2025-12-21T20:27:38.922949694Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hostname",
	        "HostsPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/hosts",
	        "LogPath": "/var/lib/docker/containers/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca/f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca-json.log",
	        "Name": "/newest-cni-734511",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-734511:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-734511",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f11eda59f7a4ef16e058e6e06dca366913c9719fe0cdc2d648fcda177160cbca",
	                "LowerDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe5925a5294cbd7c0c17ec36e57dff2f746a0aa48cbe5d305abb047ecee8f350/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-734511",
	                "Source": "/var/lib/docker/volumes/newest-cni-734511/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-734511",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-734511",
	                "name.minikube.sigs.k8s.io": "newest-cni-734511",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d2fd62fdf1e67c72486733620259ef3e8e6a6ada105e62cfc532374fbb351cee",
	            "SandboxKey": "/var/run/docker/netns/d2fd62fdf1e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-734511": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14816134e98be2c6f9635a0cd5947ae7aa1c8333188fd4c39e01a9672f929d75",
	                    "EndpointID": "e6bc62d1bf054993a4a162d4506d5cd466701e4f5a86ec56ac04e72df7b571c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:5e:16:47:51:ae",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-734511",
	                        "f11eda59f7a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511: exit status 2 (307.272536ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-734511 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬────
─────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼────
─────────────────┤
	│ pause   │ -p old-k8s-version-699289 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:26 UTC │                     │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ stop    │ -p newest-cni-734511 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p test-preload-dl-gcs-162834                                                                                                                                                                                                                      │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                       │ test-preload-dl-github-984988     │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-734511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832404 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                      │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832404                                                                                                                                                                                                               │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ newest-cni-734511 image list --format=json                                                                                                                                                                                                         │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p newest-cni-734511 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴────
─────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:39.861418  366911 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:39.861689  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861699  366911 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:39.861716  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861952  366911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:39.862461  366911 out.go:368] Setting JSON to false
	I1221 20:27:39.863571  366911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4209,"bootTime":1766344651,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:39.863625  366911 start.go:143] virtualization: kvm guest
	I1221 20:27:39.865281  366911 out.go:179] * [test-preload-dl-gcs-cached-832404] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:39.866365  366911 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:39.866394  366911 notify.go:221] Checking for updates...
	I1221 20:27:39.868343  366911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:39.869547  366911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:39.870766  366911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:39.871777  366911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:39.872792  366911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:39.813750  366768 start.go:309] selected driver: docker
	I1221 20:27:39.813763  366768 start.go:928] validating driver "docker" against &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.813865  366768 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:39.814431  366768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.876119  366768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-21 20:27:39.8661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.876540  366768 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:39.876591  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:39.876661  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:39.876724  366768 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.878915  366768 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:39.879838  366768 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:39.880931  366768 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:39.881866  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:39.881912  366768 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:39.881925  366768 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:39.881974  366768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:39.882031  366768 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:39.882046  366768 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:39.882176  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:39.903361  366768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:39.903382  366768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:39.903398  366768 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:39.903430  366768 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:39.903492  366768 start.go:364] duration metric: took 34.632µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:39.903512  366768 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:27:39.903523  366768 fix.go:54] fixHost starting: 
	I1221 20:27:39.903753  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:39.923053  366768 fix.go:112] recreateIfNeeded on newest-cni-734511: state=Stopped err=<nil>
	W1221 20:27:39.923121  366768 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:27:39.874491  366911 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:39.874647  366911 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:39.874760  366911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:39.901645  366911 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:39.901739  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.958327  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-21 20:27:39.948377601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.958440  366911 docker.go:319] overlay module found
	I1221 20:27:39.959925  366911 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:39.961104  366911 start.go:309] selected driver: docker
	I1221 20:27:39.961123  366911 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:39.961304  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:40.019442  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:27:40.008652501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:40.019675  366911 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:27:40.020403  366911 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 20:27:40.020608  366911 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 20:27:40.023852  366911 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:40.025067  366911 cni.go:84] Creating CNI manager for ""
	I1221 20:27:40.025144  366911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:40.025159  366911 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:40.025380  366911 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-832404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-832404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I1221 20:27:40.026811  366911 out.go:179] * Starting "test-preload-dl-gcs-cached-832404" primary control-plane node in "test-preload-dl-gcs-cached-832404" cluster
	I1221 20:27:40.028161  366911 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:40.030043  366911 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:40.031142  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031190  366911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:40.031200  366911 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:40.031280  366911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:40.031312  366911 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:40.031323  366911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I1221 20:27:40.031455  366911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json ...
	I1221 20:27:40.031477  366911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json: {Name:mkf6696e0851cdf6856c1ee2548d89a9b19f171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:40.031631  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031707  366911 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I1221 20:27:40.056706  366911 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:40.056732  366911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 20:27:40.056815  366911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 20:27:40.056829  366911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 20:27:40.056833  366911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 20:27:40.056842  366911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 20:27:40.056853  366911 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:40.058381  366911 out.go:179] * Download complete!
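
The download step above short-circuits because the preload tarball is already on disk. A minimal sketch of that cache check, assuming the directory layout and file naming visible in the log (illustrative only, not minikube's actual preload.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the tarball path using the naming seen in the log above
// (an assumption for illustration, not minikube's real logic).
func preloadPath(miniHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := filepath.Join(os.Getenv("HOME"), ".minikube")
	p := preloadPath(home, "v1.34.0-rc.2")
	if _, err := os.Stat(p); err == nil {
		// Tarball already cached: skip the download, as the log reports.
		fmt.Println("Found local preload:", p, "- skipping download")
		return
	}
	fmt.Println("Preload not cached; a real run would download it here")
}
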
	W1221 20:27:39.136122  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:41.635776  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:39.924855  366768 out.go:252] * Restarting existing docker container for "newest-cni-734511" ...
	I1221 20:27:39.924929  366768 cli_runner.go:164] Run: docker start newest-cni-734511
	I1221 20:27:40.181723  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:40.200215  366768 kic.go:430] container "newest-cni-734511" state is running.
	I1221 20:27:40.200630  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:40.221078  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:40.221314  366768 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:40.221390  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:40.240477  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:40.240777  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:40.240791  366768 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:40.241508  366768 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52326->127.0.0.1:33139: read: connection reset by peer
	I1221 20:27:43.377002  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.377031  366768 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:43.377090  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.394956  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.395200  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.395215  366768 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:43.540257  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.540338  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.558595  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.558789  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.558805  366768 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:43.693472  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:43.693519  366768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:43.693547  366768 ubuntu.go:190] setting up certificates
	I1221 20:27:43.693561  366768 provision.go:84] configureAuth start
	I1221 20:27:43.693606  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:43.711122  366768 provision.go:143] copyHostCerts
	I1221 20:27:43.711190  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:43.711206  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:43.711307  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:43.711418  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:43.711428  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:43.711455  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:43.711526  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:43.711534  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:43.711556  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:43.711608  366768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:43.863689  366768 provision.go:177] copyRemoteCerts
	I1221 20:27:43.863758  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:43.863795  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.880942  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:43.976993  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:43.994083  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:44.010099  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:44.026129  366768 provision.go:87] duration metric: took 332.557611ms to configureAuth
	I1221 20:27:44.026157  366768 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:44.026344  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:44.026447  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.044140  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:44.044410  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:44.044442  366768 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:44.337510  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:44.337537  366768 machine.go:97] duration metric: took 4.116205242s to provisionDockerMachine
	I1221 20:27:44.337550  366768 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:44.337565  366768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:44.337645  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:44.337696  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.356430  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.456570  366768 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:44.460019  366768 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:44.460045  366768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:44.460055  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:44.460115  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:44.460217  366768 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:44.460366  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:44.467484  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:44.484578  366768 start.go:296] duration metric: took 147.011218ms for postStartSetup
	I1221 20:27:44.484652  366768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:44.484701  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.502940  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.597000  366768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:44.601372  366768 fix.go:56] duration metric: took 4.697843581s for fixHost
	I1221 20:27:44.601398  366768 start.go:83] releasing machines lock for "newest-cni-734511", held for 4.697894238s
	I1221 20:27:44.601460  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:44.619235  366768 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:44.619305  366768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:44.619325  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.619372  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.640849  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.641206  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.788588  366768 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:44.794953  366768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:44.828982  366768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:44.833576  366768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:44.833632  366768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:44.841303  366768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:44.841323  366768 start.go:496] detecting cgroup driver to use...
	I1221 20:27:44.841355  366768 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:44.841399  366768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:44.854483  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:44.866035  366768 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:44.866075  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:44.879803  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:44.891096  366768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:44.962811  366768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:45.036580  366768 docker.go:234] disabling docker service ...
	I1221 20:27:45.036655  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:45.049959  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:45.061658  366768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:45.143449  366768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:45.222903  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:45.237087  366768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:45.250978  366768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:45.251037  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.259700  366768 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:45.259758  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.268003  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.276177  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.284319  366768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:45.291742  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.299910  366768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.307415  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.315340  366768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:45.322121  366768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:45.328957  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.401093  366768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:27:45.538335  366768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:45.538418  366768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:45.542214  366768 start.go:564] Will wait 60s for crictl version
	I1221 20:27:45.542281  366768 ssh_runner.go:195] Run: which crictl
	I1221 20:27:45.545577  366768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:45.568875  366768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:45.568942  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.595166  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.623728  366768 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:45.624987  366768 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:45.644329  366768 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:45.649761  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.662664  366768 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:45.663704  366768 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:45.663826  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:45.663883  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.694292  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.694315  366768 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:45.694369  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.718991  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.719012  366768 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:45.719021  366768 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:45.719114  366768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:45.719176  366768 ssh_runner.go:195] Run: crio config
	I1221 20:27:45.762367  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:45.762384  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:45.762397  366768 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:45.762418  366768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:45.762543  366768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:45.762599  366768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:45.770445  366768 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:45.770499  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:45.778476  366768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:45.790329  366768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:45.801764  366768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 20:27:45.813017  366768 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:45.816383  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.825744  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.897847  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:45.922243  366768 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:45.922261  366768 certs.go:195] generating shared ca certs ...
	I1221 20:27:45.922276  366768 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:45.922431  366768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:45.922536  366768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:45.922554  366768 certs.go:257] generating profile certs ...
	I1221 20:27:45.922657  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:45.922734  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:45.922785  366768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:45.922933  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:45.922989  366768 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:45.923004  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:45.923043  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:45.923080  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:45.923115  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:45.923174  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:45.923964  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:45.941766  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:45.959821  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:45.977641  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:45.999591  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:46.017180  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:46.033291  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:46.049616  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:46.065936  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:46.082176  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:46.100908  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:46.118404  366768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:46.130148  366768 ssh_runner.go:195] Run: openssl version
	I1221 20:27:46.135988  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.143205  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:46.150252  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153722  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153769  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.187692  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:46.195104  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.201979  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:46.209200  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212567  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212618  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.247457  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:46.254920  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.261949  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:46.268910  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272330  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272382  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.306863  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:46.313724  366768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:46.317164  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:46.350547  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:46.384130  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:46.422703  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:46.467027  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:46.517807  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 20:27:46.567421  366768 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:46.567522  366768 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:46.567577  366768 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:46.602496  366768 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:46.602528  366768 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:46.602535  366768 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:46.602540  366768 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:46.602544  366768 cri.go:96] found id: ""
	I1221 20:27:46.602592  366768 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:46.614070  366768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:46Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:46.614136  366768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:46.621873  366768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:46.621908  366768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:46.621949  366768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:46.629767  366768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:46.630431  366768 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-734511" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.630721  366768 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-734511" cluster setting kubeconfig missing "newest-cni-734511" context setting]
	I1221 20:27:46.631272  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.632955  366768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:46.640752  366768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1221 20:27:46.640781  366768 kubeadm.go:602] duration metric: took 18.866801ms to restartPrimaryControlPlane
	I1221 20:27:46.640790  366768 kubeadm.go:403] duration metric: took 73.379872ms to StartCluster
	I1221 20:27:46.640811  366768 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.640881  366768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.641874  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.642101  366768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:46.642274  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:46.642329  366768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:46.642383  366768 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-734511"
	I1221 20:27:46.642399  366768 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-734511"
	W1221 20:27:46.642406  366768 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:46.642425  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642739  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.642780  366768 addons.go:70] Setting dashboard=true in profile "newest-cni-734511"
	I1221 20:27:46.642802  366768 addons.go:239] Setting addon dashboard=true in "newest-cni-734511"
	W1221 20:27:46.642810  366768 addons.go:248] addon dashboard should already be in state true
	I1221 20:27:46.642825  366768 addons.go:70] Setting default-storageclass=true in profile "newest-cni-734511"
	I1221 20:27:46.642836  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642853  366768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-734511"
	I1221 20:27:46.643163  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.643341  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.644615  366768 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:46.646107  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:46.668529  366768 addons.go:239] Setting addon default-storageclass=true in "newest-cni-734511"
	W1221 20:27:46.668549  366768 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:46.668571  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.668906  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.669412  366768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:46.669424  366768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:46.670744  366768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1221 20:27:43.636464  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:45.637045  355293 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:27:45.637079  355293 pod_ready.go:86] duration metric: took 31.005880117s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.639371  355293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.643368  355293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.643393  355293 pod_ready.go:86] duration metric: took 3.995822ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.645204  355293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.649549  355293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.649576  355293 pod_ready.go:86] duration metric: took 4.334095ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.651465  355293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.835343  355293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.835366  355293 pod_ready.go:86] duration metric: took 183.883765ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.035541  355293 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.435715  355293 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:27:46.435746  355293 pod_ready.go:86] duration metric: took 400.180233ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.634643  355293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034660  355293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:47.034685  355293 pod_ready.go:86] duration metric: took 400.019644ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034697  355293 pod_ready.go:40] duration metric: took 32.40680352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:47.076294  355293 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:47.077955  355293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
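
The pod_ready waits above poll each kube-system pod until its PodReady condition reports True. A minimal client-go sketch of that loop, assuming a kubeconfig at the default location and reusing the pod name from the log (illustrative only; minikube's pod_ready.go differs in detail):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-bp67f", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // retry until Ready or the deadline passes
	}
	fmt.Println("gave up waiting for the pod to become Ready")
}
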
	I1221 20:27:46.670728  366768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.670797  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:46.670848  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.671763  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:46.671780  366768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:46.671829  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.700977  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.702794  366768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.702814  366768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:46.702867  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.708071  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.726576  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.783599  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:46.796337  366768 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:46.796401  366768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:46.809276  366768 api_server.go:72] duration metric: took 167.144497ms to wait for apiserver process to appear ...
	I1221 20:27:46.809302  366768 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:46.809324  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:46.817287  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.821194  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:46.821236  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:46.837316  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:46.837342  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:46.838461  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.852066  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:46.852094  366768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:46.867040  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:46.867061  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:46.880590  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:46.880613  366768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:46.893474  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:46.893500  366768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:46.905440  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:46.905462  366768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:46.917382  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:46.917402  366768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:46.929133  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:46.929151  366768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:46.941146  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:48.329199  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1221 20:27:48.329247  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1221 20:27:48.329271  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.340161  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.340244  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.809402  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.813323  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.813346  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.847081  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.029754993s)
	I1221 20:27:48.847159  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00866423s)
	I1221 20:27:48.847289  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.906109829s)
	I1221 20:27:48.850396  366768 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-734511 addons enable metrics-server
	
	I1221 20:27:48.857477  366768 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:48.858708  366768 addons.go:530] duration metric: took 2.216387065s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:49.309469  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.314167  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:49.314201  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:49.809466  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.813534  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:27:49.814524  366768 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:27:49.814550  366768 api_server.go:131] duration metric: took 3.005240792s to wait for apiserver health ...
	I1221 20:27:49.814561  366768 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:49.818217  366768 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:49.818279  366768 system_pods.go:61] "coredns-7d764666f9-jlczz" [8571aecb-77d8-4d07-90b2-fd10aca80bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818296  366768 system_pods.go:61] "etcd-newest-cni-734511" [5f6a8b90-3b7d-433a-8e62-fc0be1f726a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:49.818307  366768 system_pods.go:61] "kindnet-ztvbb" [0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:27:49.818319  366768 system_pods.go:61] "kube-apiserver-newest-cni-734511" [d0ac5067-f06f-4fff-853f-483d61d3a345] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:49.818330  366768 system_pods.go:61] "kube-controller-manager-newest-cni-734511" [fcb485ed-488d-41fb-b94c-dd1321961ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:49.818340  366768 system_pods.go:61] "kube-proxy-9mrbd" [462d4133-ac15-436a-91fe-13e1ec9c1430] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:27:49.818346  366768 system_pods.go:61] "kube-scheduler-newest-cni-734511" [e0670313-ee97-46e9-9090-98628a7613e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:49.818353  366768 system_pods.go:61] "storage-provisioner" [5bfed1a9-5cd0-45a6-abf9-ae34c8f2ab35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818359  366768 system_pods.go:74] duration metric: took 3.791516ms to wait for pod list to return data ...
	I1221 20:27:49.818368  366768 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:49.820504  366768 default_sa.go:45] found service account: "default"
	I1221 20:27:49.820526  366768 default_sa.go:55] duration metric: took 2.152518ms for default service account to be created ...
	I1221 20:27:49.820542  366768 kubeadm.go:587] duration metric: took 3.178410939s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:49.820567  366768 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:49.822831  366768 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:49.822855  366768 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:49.822871  366768 node_conditions.go:105] duration metric: took 2.298304ms to run NodePressure ...
	I1221 20:27:49.822886  366768 start.go:242] waiting for startup goroutines ...
	I1221 20:27:49.822900  366768 start.go:247] waiting for cluster config update ...
	I1221 20:27:49.822919  366768 start.go:256] writing updated cluster config ...
	I1221 20:27:49.823160  366768 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:49.870266  366768 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:49.872014  366768 out.go:179] * Done! kubectl is now configured to use "newest-cni-734511" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.29327799Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-9mrbd/POD" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.293344217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.294172486Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.295121027Z" level=info msg="Ran pod sandbox e34e7f1488246f31d65d40ed56a19488b09667c154265d401e51a4a1c4022717 with infra container: kube-system/kindnet-ztvbb/POD" id=8cab90ef-b03f-42f0-9e4a-56badcd300d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.296180568Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=bf95da90-7bd1-4c74-8207-1648d53fe56c name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.296253007Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.29708702Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=ea3a8da1-390a-49eb-b8f0-04493fd7d9a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.297865103Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298154635Z" level=info msg="Creating container: kube-system/kindnet-ztvbb/kindnet-cni" id=4484e614-7ad7-4e99-af06-cbcba3d7b876 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298274315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.298597125Z" level=info msg="Ran pod sandbox 5ca8c9370d0bb3523d144b2cce06dd6d80fdb5301beb4a87fbf8469878c96f2a with infra container: kube-system/kube-proxy-9mrbd/POD" id=c95a3ec7-2f70-4ed2-b4c9-d6a345954c70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.299546066Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=ced98bf2-85be-42b8-9bb4-71f3366e4bae name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.301541106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=deb4ef2f-0aa2-4954-a359-6f9b591f51d9 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302014093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302636742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302901922Z" level=info msg="Creating container: kube-system/kube-proxy-9mrbd/kube-proxy" id=ab764b96-908d-460d-87ea-29cfa861d409 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.302998462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.307660206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.308094228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.329194632Z" level=info msg="Created container 7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97: kube-system/kindnet-ztvbb/kindnet-cni" id=4484e614-7ad7-4e99-af06-cbcba3d7b876 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.329769456Z" level=info msg="Starting container: 7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97" id=1a65bc5a-d5bd-42a2-bc52-5ddc20fceb79 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.331703384Z" level=info msg="Started container" PID=1058 containerID=7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97 description=kube-system/kindnet-ztvbb/kindnet-cni id=1a65bc5a-d5bd-42a2-bc52-5ddc20fceb79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e34e7f1488246f31d65d40ed56a19488b09667c154265d401e51a4a1c4022717
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.333907456Z" level=info msg="Created container b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5: kube-system/kube-proxy-9mrbd/kube-proxy" id=ab764b96-908d-460d-87ea-29cfa861d409 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.33446356Z" level=info msg="Starting container: b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5" id=388fc65e-90e2-480b-b408-3e2c780d83ac name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:49 newest-cni-734511 crio[525]: time="2025-12-21T20:27:49.337697396Z" level=info msg="Started container" PID=1059 containerID=b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5 description=kube-system/kube-proxy-9mrbd/kube-proxy id=388fc65e-90e2-480b-b408-3e2c780d83ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ca8c9370d0bb3523d144b2cce06dd6d80fdb5301beb4a87fbf8469878c96f2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b8cd81c4ecb02       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   5 seconds ago       Running             kube-proxy                1                   5ca8c9370d0bb       kube-proxy-9mrbd                            kube-system
	7b457b8d0d855       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   e34e7f1488246       kindnet-ztvbb                               kube-system
	63cadcc519eb2       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   8 seconds ago       Running             kube-controller-manager   1                   cb5a811c43c45       kube-controller-manager-newest-cni-734511   kube-system
	e33943bb495ce       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   8 seconds ago       Running             kube-scheduler            1                   f439211ad7bfe       kube-scheduler-newest-cni-734511            kube-system
	677bf72e8ae93       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   8 seconds ago       Running             kube-apiserver            1                   f607eb3ded5d8       kube-apiserver-newest-cni-734511            kube-system
	a5c272c972236       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   ca055491158a5       etcd-newest-cni-734511                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-734511
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-734511
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=newest-cni-734511
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_27_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:27:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-734511
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 21 Dec 2025 20:27:48 +0000   Sun, 21 Dec 2025 20:27:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-734511
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                ac30e952-d18a-4d33-99ce-65bf90d321e1
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-734511                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-ztvbb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-734511             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-734511    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-9mrbd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-734511             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node newest-cni-734511 event: Registered Node newest-cni-734511 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-734511 event: Registered Node newest-cni-734511 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3] <==
	{"level":"info","ts":"2025-12-21T20:27:46.575157Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-21T20:27:46.575257Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575364Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575398Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575426Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-21T20:27:46.575513Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:27:46.575567Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-21T20:27:47.465691Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465742Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465806Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-21T20:27:47.465824Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:27:47.465841Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466476Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466494Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-21T20:27:47.466508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.466516Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-21T20:27:47.467112Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-734511 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-21T20:27:47.467120Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:47.467139Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-21T20:27:47.467410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:47.467489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-21T20:27:47.468446Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:47.468476Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-21T20:27:47.471854Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-21T20:27:47.471889Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:27:55 up  1:10,  0 user,  load average: 3.42, 3.76, 2.79
	Linux newest-cni-734511 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b457b8d0d855a754baaa8792d5109b2b165bf3b3fbd7f9b898c79325fbd4d97] <==
	I1221 20:27:49.581517       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:27:49.581801       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1221 20:27:49.581941       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:27:49.581969       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:27:49.581998       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:27:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:27:49.691704       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:27:49.781034       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:27:49.781163       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:27:49.781509       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:27:50.082088       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:27:50.082120       1 metrics.go:72] Registering metrics
	I1221 20:27:50.082202       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412] <==
	I1221 20:27:48.403380       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:27:48.403537       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:27:48.403716       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:27:48.403791       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.403830       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.405832       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1221 20:27:48.405923       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 20:27:48.418578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:27:48.424121       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:27:48.424173       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1221 20:27:48.431016       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.431032       1 policy_source.go:248] refreshing policies
	I1221 20:27:48.438560       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:27:48.669506       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:27:48.694652       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:27:48.710770       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:27:48.716736       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:27:48.723578       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:27:48.750624       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.20.45"}
	I1221 20:27:48.759852       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.3.134"}
	I1221 20:27:49.306055       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1221 20:27:51.886893       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:51.886947       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:52.036679       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1221 20:27:52.136454       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb] <==
	I1221 20:27:51.537961       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538437       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538459       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1221 20:27:51.538542       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538573       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538731       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538781       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538936       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538966       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.538989       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539132       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539151       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539169       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539180       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539250       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539907       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.539936       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.545610       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.552967       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:51.557499       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-734511"
	I1221 20:27:51.557564       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1221 20:27:51.640183       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:51.640205       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1221 20:27:51.640210       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1221 20:27:51.653912       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b8cd81c4ecb02986a716dd360674b57215cb6762554ceeae1ec407e93ddb8aa5] <==
	I1221 20:27:49.377138       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:27:49.462191       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:49.562711       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:49.562749       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1221 20:27:49.562864       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:27:49.580152       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:27:49.580216       1 server_linux.go:136] "Using iptables Proxier"
	I1221 20:27:49.585280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:27:49.586078       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1221 20:27:49.586128       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:49.588255       1 config.go:200] "Starting service config controller"
	I1221 20:27:49.588277       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:27:49.588308       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:27:49.588325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:27:49.588364       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:27:49.588375       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:27:49.588398       1 config.go:309] "Starting node config controller"
	I1221 20:27:49.588410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:27:49.688475       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:27:49.688490       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1221 20:27:49.688526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:27:49.688620       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28] <==
	I1221 20:27:46.925658       1 serving.go:386] Generated self-signed cert in-memory
	I1221 20:27:48.358967       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1221 20:27:48.358999       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:48.366959       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1221 20:27:48.367074       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367012       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:48.367111       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367044       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1221 20:27:48.367203       1 shared_informer.go:370] "Waiting for caches to sync"
	I1221 20:27:48.367243       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:27:48.367279       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:27:48.467401       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.467462       1 shared_informer.go:377] "Caches are synced"
	I1221 20:27:48.467496       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.496358     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.501691     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-734511\" already exists" pod="kube-system/kube-scheduler-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.501726     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.506505     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-734511\" already exists" pod="kube-system/etcd-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.506539     680 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.511210     680 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-734511\" already exists" pod="kube-system/kube-apiserver-newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524158     680 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524285     680 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-734511"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.524326     680 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.525193     680 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: I1221 20:27:48.982039     680 apiserver.go:52] "Watching apiserver"
	Dec 21 20:27:48 newest-cni-734511 kubelet[680]: E1221 20:27:48.986622     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-734511" containerName="kube-controller-manager"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.018953     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-734511" containerName="kube-apiserver"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.019055     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: E1221 20:27:49.019082     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-734511" containerName="etcd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.086735     680 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.137984     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-xtables-lock\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138038     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-lib-modules\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138075     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-xtables-lock\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138216     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462d4133-ac15-436a-91fe-13e1ec9c1430-lib-modules\") pod \"kube-proxy-9mrbd\" (UID: \"462d4133-ac15-436a-91fe-13e1ec9c1430\") " pod="kube-system/kube-proxy-9mrbd"
	Dec 21 20:27:49 newest-cni-734511 kubelet[680]: I1221 20:27:49.138282     680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d-cni-cfg\") pod \"kindnet-ztvbb\" (UID: \"0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d\") " pod="kube-system/kindnet-ztvbb"
	Dec 21 20:27:50 newest-cni-734511 kubelet[680]: E1221 20:27:50.024804     680 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-734511" containerName="kube-scheduler"
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:50 newest-cni-734511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734511 -n newest-cni-734511
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734511 -n newest-cni-734511: exit status 2 (310.380874ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-734511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9: exit status 1 (57.652772ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jlczz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-bjdvk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-2lpm9" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-734511 describe pod coredns-7d764666f9-jlczz storage-provisioner dashboard-metrics-scraper-867fb5f87b-bjdvk kubernetes-dashboard-b84665fb8-2lpm9: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.60s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-766361 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-766361 --alsologtostderr -v=1: exit status 80 (1.677849354s)

-- stdout --
	* Pausing node default-k8s-diff-port-766361 ... 
	
	

-- /stdout --
** stderr ** 
	I1221 20:27:58.755768  371276 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:58.755889  371276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:58.755900  371276 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:58.755906  371276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:58.756094  371276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:58.756349  371276 out.go:368] Setting JSON to false
	I1221 20:27:58.756373  371276 mustload.go:66] Loading cluster: default-k8s-diff-port-766361
	I1221 20:27:58.756763  371276 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:58.757201  371276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-766361 --format={{.State.Status}}
	I1221 20:27:58.776126  371276 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:58.776477  371276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:58.828931  371276 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-21 20:27:58.819594471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:58.829614  371276 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766254259-22261/minikube-v1.37.0-1766254259-22261-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766254259-22261-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-766361 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantup
datenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1221 20:27:58.831254  371276 out.go:179] * Pausing node default-k8s-diff-port-766361 ... 
	I1221 20:27:58.832311  371276 host.go:66] Checking if "default-k8s-diff-port-766361" exists ...
	I1221 20:27:58.832574  371276 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:58.832608  371276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-766361
	I1221 20:27:58.849970  371276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/default-k8s-diff-port-766361/id_rsa Username:docker}
	I1221 20:27:58.944437  371276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:58.955832  371276 pause.go:52] kubelet running: true
	I1221 20:27:58.955891  371276 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:59.113598  371276 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:59.113703  371276 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:59.176449  371276 cri.go:96] found id: "91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0"
	I1221 20:27:59.176471  371276 cri.go:96] found id: "6562f43639a320e098d9e4ad843cc037d45453fa65cb6cb1e4248d06d8197488"
	I1221 20:27:59.176475  371276 cri.go:96] found id: "0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84"
	I1221 20:27:59.176478  371276 cri.go:96] found id: "12105efc4f2b781f722122e1b964d9ab68c8321dae8011e99c3d709752394fcb"
	I1221 20:27:59.176481  371276 cri.go:96] found id: "e6caa72f4d923f220f83a305f8088c750602dbeb5769494d0ffb6489592bbc58"
	I1221 20:27:59.176485  371276 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:27:59.176488  371276 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:27:59.176490  371276 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:27:59.176493  371276 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:27:59.176503  371276 cri.go:96] found id: "57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	I1221 20:27:59.176506  371276 cri.go:96] found id: "ed1a2848594e0790b69aa5bd98a39232a7761c6729fca3b526d211ed609091f6"
	I1221 20:27:59.176509  371276 cri.go:96] found id: ""
	I1221 20:27:59.176550  371276 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:27:59.187925  371276 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:59Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:59.548546  371276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:27:59.560681  371276 pause.go:52] kubelet running: false
	I1221 20:27:59.560729  371276 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:27:59.695654  371276 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:27:59.695738  371276 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:27:59.759371  371276 cri.go:96] found id: "91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0"
	I1221 20:27:59.759392  371276 cri.go:96] found id: "6562f43639a320e098d9e4ad843cc037d45453fa65cb6cb1e4248d06d8197488"
	I1221 20:27:59.759396  371276 cri.go:96] found id: "0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84"
	I1221 20:27:59.759399  371276 cri.go:96] found id: "12105efc4f2b781f722122e1b964d9ab68c8321dae8011e99c3d709752394fcb"
	I1221 20:27:59.759402  371276 cri.go:96] found id: "e6caa72f4d923f220f83a305f8088c750602dbeb5769494d0ffb6489592bbc58"
	I1221 20:27:59.759405  371276 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:27:59.759408  371276 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:27:59.759411  371276 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:27:59.759413  371276 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:27:59.759419  371276 cri.go:96] found id: "57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	I1221 20:27:59.759422  371276 cri.go:96] found id: "ed1a2848594e0790b69aa5bd98a39232a7761c6729fca3b526d211ed609091f6"
	I1221 20:27:59.759424  371276 cri.go:96] found id: ""
	I1221 20:27:59.759471  371276 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:28:00.150162  371276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:28:00.162478  371276 pause.go:52] kubelet running: false
	I1221 20:28:00.162542  371276 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1221 20:28:00.293513  371276 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1221 20:28:00.293610  371276 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1221 20:28:00.355917  371276 cri.go:96] found id: "91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0"
	I1221 20:28:00.355945  371276 cri.go:96] found id: "6562f43639a320e098d9e4ad843cc037d45453fa65cb6cb1e4248d06d8197488"
	I1221 20:28:00.355951  371276 cri.go:96] found id: "0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84"
	I1221 20:28:00.355955  371276 cri.go:96] found id: "12105efc4f2b781f722122e1b964d9ab68c8321dae8011e99c3d709752394fcb"
	I1221 20:28:00.355960  371276 cri.go:96] found id: "e6caa72f4d923f220f83a305f8088c750602dbeb5769494d0ffb6489592bbc58"
	I1221 20:28:00.355965  371276 cri.go:96] found id: "95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7"
	I1221 20:28:00.355970  371276 cri.go:96] found id: "bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6"
	I1221 20:28:00.355974  371276 cri.go:96] found id: "bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0"
	I1221 20:28:00.355979  371276 cri.go:96] found id: "7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f"
	I1221 20:28:00.356005  371276 cri.go:96] found id: "57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	I1221 20:28:00.356014  371276 cri.go:96] found id: "ed1a2848594e0790b69aa5bd98a39232a7761c6729fca3b526d211ed609091f6"
	I1221 20:28:00.356019  371276 cri.go:96] found id: ""
	I1221 20:28:00.356057  371276 ssh_runner.go:195] Run: sudo runc list -f json
	I1221 20:28:00.368836  371276 out.go:203] 
	W1221 20:28:00.370040  371276 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:28:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:28:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1221 20:28:00.370085  371276 out.go:285] * 
	* 
	W1221 20:28:00.374089  371276 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 20:28:00.375231  371276 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-766361 --alsologtostderr -v=1 failed: exit status 80
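Note: the exit status 80 above traces back to `sudo runc list -f json` failing repeatedly with "open /run/runc: no such file or directory" after the kubelet had been stopped, as shown in the stderr block. A minimal sketch for re-running that check by hand, assuming the default-k8s-diff-port-766361 container is still running and that this kicbase image keeps OCI runtime state under /run/runc or /run/crun (the /run/crun alternative is an assumption, not confirmed by this log):

	# Re-run the listing that minikube pause attempts (command copied from the log above)
	docker exec default-k8s-diff-port-766361 sudo runc list -f json
	# Check which runtime state directories exist inside the node (directory names assumed)
	docker exec default-k8s-diff-port-766361 ls -d /run/runc /run/crun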
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-766361
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-766361:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	        "Created": "2025-12-21T20:25:56.399803234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:27:01.969153121Z",
	            "FinishedAt": "2025-12-21T20:27:00.892356964Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hosts",
	        "LogPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e-json.log",
	        "Name": "/default-k8s-diff-port-766361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-766361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-766361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	                "LowerDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-766361",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-766361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-766361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e66b90cfaaf3ac3a40d72e25945bda055210415f1a80fb75f18fce3fd25735df",
	            "SandboxKey": "/var/run/docker/netns/e66b90cfaaf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-766361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da966e5bad965057a3f23332d40d7f74bcb84482d07b5154dbfb77c723cfe0cd",
	                    "EndpointID": "3de00a681e765d24454a0e9032ade118293671adcb3e15e624b362726a3af34d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:e3:9d:cf:e8:d0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-766361",
	                        "7b1bfe9daca1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361: exit status 2 (310.189081ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25
E1221 20:28:00.984893   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25: (1.0050491s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬────
─────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼────
─────────────────┤
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ stop    │ -p newest-cni-734511 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p test-preload-dl-gcs-162834                                                                                                                                                                                                                      │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                       │ test-preload-dl-github-984988     │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-734511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832404 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                      │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832404                                                                                                                                                                                                               │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ newest-cni-734511 image list --format=json                                                                                                                                                                                                         │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p newest-cni-734511 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p newest-cni-734511                                                                                                                                                                                                                               │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p newest-cni-734511                                                                                                                                                                                                                               │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ default-k8s-diff-port-766361 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p default-k8s-diff-port-766361 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴────
─────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:39.861418  366911 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:39.861689  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861699  366911 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:39.861716  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861952  366911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:39.862461  366911 out.go:368] Setting JSON to false
	I1221 20:27:39.863571  366911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4209,"bootTime":1766344651,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:39.863625  366911 start.go:143] virtualization: kvm guest
	I1221 20:27:39.865281  366911 out.go:179] * [test-preload-dl-gcs-cached-832404] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:39.866365  366911 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:39.866394  366911 notify.go:221] Checking for updates...
	I1221 20:27:39.868343  366911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:39.869547  366911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:39.870766  366911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:39.871777  366911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:39.872792  366911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:39.813750  366768 start.go:309] selected driver: docker
	I1221 20:27:39.813763  366768 start.go:928] validating driver "docker" against &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.813865  366768 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:39.814431  366768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.876119  366768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-21 20:27:39.8661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.876540  366768 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:39.876591  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:39.876661  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:39.876724  366768 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.878915  366768 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:39.879838  366768 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:39.880931  366768 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:39.881866  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:39.881912  366768 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:39.881925  366768 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:39.881974  366768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:39.882031  366768 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:39.882046  366768 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:39.882176  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:39.903361  366768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:39.903382  366768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:39.903398  366768 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:39.903430  366768 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:39.903492  366768 start.go:364] duration metric: took 34.632µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:39.903512  366768 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:27:39.903523  366768 fix.go:54] fixHost starting: 
	I1221 20:27:39.903753  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:39.923053  366768 fix.go:112] recreateIfNeeded on newest-cni-734511: state=Stopped err=<nil>
	W1221 20:27:39.923121  366768 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:27:39.874491  366911 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:39.874647  366911 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:39.874760  366911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:39.901645  366911 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:39.901739  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.958327  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-21 20:27:39.948377601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.958440  366911 docker.go:319] overlay module found
	I1221 20:27:39.959925  366911 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:39.961104  366911 start.go:309] selected driver: docker
	I1221 20:27:39.961123  366911 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:39.961304  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:40.019442  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:27:40.008652501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:40.019675  366911 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:27:40.020403  366911 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 20:27:40.020608  366911 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 20:27:40.023852  366911 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:40.025067  366911 cni.go:84] Creating CNI manager for ""
	I1221 20:27:40.025144  366911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:40.025159  366911 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:40.025380  366911 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-832404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-832404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I1221 20:27:40.026811  366911 out.go:179] * Starting "test-preload-dl-gcs-cached-832404" primary control-plane node in "test-preload-dl-gcs-cached-832404" cluster
	I1221 20:27:40.028161  366911 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:40.030043  366911 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:40.031142  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031190  366911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:40.031200  366911 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:40.031280  366911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:40.031312  366911 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:40.031323  366911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I1221 20:27:40.031455  366911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json ...
	I1221 20:27:40.031477  366911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json: {Name:mkf6696e0851cdf6856c1ee2548d89a9b19f171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:40.031631  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031707  366911 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I1221 20:27:40.056706  366911 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:40.056732  366911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 20:27:40.056815  366911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 20:27:40.056829  366911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 20:27:40.056833  366911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 20:27:40.056842  366911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 20:27:40.056853  366911 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:40.058381  366911 out.go:179] * Download complete!
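[Editor's note] The preload step above skips the tarball download because the lz4 archive is already in the local cache. A tiny Go sketch of that check-before-download pattern; the cache layout and file name are copied from the log, and the download step itself is omitted:

// Hypothetical sketch, not minikube's implementation: only fetch the preload
// tarball if it is not already present under the local cache directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(cacheDir, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.0-rc.2", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
		return
	}
	fmt.Println("preload missing, would download:", p)
	// download of the tarball is out of scope for this sketch
}
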
	W1221 20:27:39.136122  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:41.635776  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:39.924855  366768 out.go:252] * Restarting existing docker container for "newest-cni-734511" ...
	I1221 20:27:39.924929  366768 cli_runner.go:164] Run: docker start newest-cni-734511
	I1221 20:27:40.181723  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:40.200215  366768 kic.go:430] container "newest-cni-734511" state is running.
	I1221 20:27:40.200630  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:40.221078  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:40.221314  366768 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:40.221390  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:40.240477  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:40.240777  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:40.240791  366768 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:40.241508  366768 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52326->127.0.0.1:33139: read: connection reset by peer
	I1221 20:27:43.377002  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.377031  366768 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:43.377090  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.394956  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.395200  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.395215  366768 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:43.540257  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.540338  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.558595  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.558789  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.558805  366768 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:43.693472  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
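[Editor's note] The shell snippet above makes sure a "127.0.1.1 <hostname>" entry exists in /etc/hosts, rewriting an existing 127.0.1.1 line or appending a new one. A rough Go rendering of the same logic, purely as an illustration; the real step runs that shell over SSH with sudo:

// Hypothetical sketch of the /etc/hosts fix-up seen above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(contents, hostname string) string {
	lines := strings.Split(contents, "\n")
	// Already present? (approximation of the grep '.*\s<hostname>' check)
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), hostname) {
			return contents
		}
	}
	// Rewrite an existing 127.0.1.1 line if there is one.
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	// Otherwise append a fresh entry.
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "newest-cni-734511"))
}
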
	I1221 20:27:43.693519  366768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:43.693547  366768 ubuntu.go:190] setting up certificates
	I1221 20:27:43.693561  366768 provision.go:84] configureAuth start
	I1221 20:27:43.693606  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:43.711122  366768 provision.go:143] copyHostCerts
	I1221 20:27:43.711190  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:43.711206  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:43.711307  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:43.711418  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:43.711428  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:43.711455  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:43.711526  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:43.711534  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:43.711556  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:43.711608  366768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:43.863689  366768 provision.go:177] copyRemoteCerts
	I1221 20:27:43.863758  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:43.863795  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.880942  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:43.976993  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:43.994083  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:44.010099  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:44.026129  366768 provision.go:87] duration metric: took 332.557611ms to configureAuth
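[Editor's note] configureAuth above refreshes the host certs and then signs a server certificate whose SANs are the node's IPs and names. A hedged Go sketch of that signing step with crypto/x509, assuming an RSA PKCS#1 CA key and reusing the SANs from the log; minikube's own provision code differs in detail:

// Minimal sketch: sign a server cert with an existing CA, embedding the SANs
// seen in the "generating server cert" line above. File names are examples.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	caPEM, err := os.ReadFile("ca.pem") // assumed CA certificate (PEM)
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem") // assumed RSA CA key (PKCS#1 PEM)
	check(err)

	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("missing PEM block in CA files")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-734511"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-734511"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
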
	I1221 20:27:44.026157  366768 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:44.026344  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:44.026447  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.044140  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:44.044410  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:44.044442  366768 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:44.337510  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:44.337537  366768 machine.go:97] duration metric: took 4.116205242s to provisionDockerMachine
	I1221 20:27:44.337550  366768 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:44.337565  366768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:44.337645  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:44.337696  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.356430  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.456570  366768 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:44.460019  366768 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:44.460045  366768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:44.460055  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:44.460115  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:44.460217  366768 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:44.460366  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:44.467484  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:44.484578  366768 start.go:296] duration metric: took 147.011218ms for postStartSetup
	I1221 20:27:44.484652  366768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:44.484701  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.502940  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.597000  366768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:44.601372  366768 fix.go:56] duration metric: took 4.697843581s for fixHost
	I1221 20:27:44.601398  366768 start.go:83] releasing machines lock for "newest-cni-734511", held for 4.697894238s
	I1221 20:27:44.601460  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:44.619235  366768 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:44.619305  366768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:44.619325  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.619372  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.640849  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.641206  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.788588  366768 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:44.794953  366768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:44.828982  366768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:44.833576  366768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:44.833632  366768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:44.841303  366768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:44.841323  366768 start.go:496] detecting cgroup driver to use...
	I1221 20:27:44.841355  366768 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:44.841399  366768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:44.854483  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:44.866035  366768 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:44.866075  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:44.879803  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:44.891096  366768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:44.962811  366768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:45.036580  366768 docker.go:234] disabling docker service ...
	I1221 20:27:45.036655  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:45.049959  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:45.061658  366768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:45.143449  366768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:45.222903  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
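[Editor's note] The sequence above stops, disables and masks cri-docker and docker so that CRI-O is the only runtime answering the CRI socket. A compact sketch of the same sequence driven from Go with os/exec; unit names mirror the log, and running it for real requires root and systemd:

// Illustration only: replay the "disable competing runtimes" steps above.
package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("systemctl %v -> err=%v out=%s\n", args, err, out)
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		systemctl("stop", "-f", unit)
	}
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}
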
	I1221 20:27:45.237087  366768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:45.250978  366768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:45.251037  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.259700  366768 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:45.259758  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.268003  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.276177  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.284319  366768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:45.291742  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.299910  366768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.307415  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.315340  366768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:45.322121  366768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:45.328957  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.401093  366768 ssh_runner.go:195] Run: sudo systemctl restart crio
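[Editor's note] The sed invocations above pin the pause image and the cgroup manager in the CRI-O drop-in config before the daemon restart. As an in-process alternative, a hypothetical Go sketch doing the two key rewrites with regexp (same file path as the log, simplified matching):

// Sketch: rewrite pause_image and cgroup_manager in the CRI-O drop-in file.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
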
	I1221 20:27:45.538335  366768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:45.538418  366768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:45.542214  366768 start.go:564] Will wait 60s for crictl version
	I1221 20:27:45.542281  366768 ssh_runner.go:195] Run: which crictl
	I1221 20:27:45.545577  366768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:45.568875  366768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:45.568942  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.595166  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.623728  366768 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:45.624987  366768 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:45.644329  366768 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:45.649761  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.662664  366768 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:45.663704  366768 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:45.663826  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:45.663883  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.694292  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.694315  366768 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:45.694369  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.718991  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.719012  366768 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:45.719021  366768 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:45.719114  366768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:45.719176  366768 ssh_runner.go:195] Run: crio config
	I1221 20:27:45.762367  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:45.762384  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:45.762397  366768 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:45.762418  366768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:45.762543  366768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:45.762599  366768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:45.770445  366768 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:45.770499  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:45.778476  366768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:45.790329  366768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:45.801764  366768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
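[Editor's note] The kubeadm.yaml written above is generated from per-profile values (node IP, node name, API server port, pod CIDR). A small, hypothetical text/template sketch producing just the InitConfiguration fragment shown earlier in the log; it is not the full file minikube writes to /var/tmp/minikube/kubeadm.yaml.new:

// Sketch: render a trimmed InitConfiguration from per-node parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.76.2", 8443, "newest-cni-734511"}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
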
	I1221 20:27:45.813017  366768 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:45.816383  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.825744  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.897847  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:45.922243  366768 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:45.922261  366768 certs.go:195] generating shared ca certs ...
	I1221 20:27:45.922276  366768 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:45.922431  366768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:45.922536  366768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:45.922554  366768 certs.go:257] generating profile certs ...
	I1221 20:27:45.922657  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:45.922734  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:45.922785  366768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:45.922933  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:45.922989  366768 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:45.923004  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:45.923043  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:45.923080  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:45.923115  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:45.923174  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:45.923964  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:45.941766  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:45.959821  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:45.977641  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:45.999591  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:46.017180  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:46.033291  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:46.049616  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:46.065936  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:46.082176  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:46.100908  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:46.118404  366768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:46.130148  366768 ssh_runner.go:195] Run: openssl version
	I1221 20:27:46.135988  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.143205  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:46.150252  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153722  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153769  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.187692  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:46.195104  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.201979  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:46.209200  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212567  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212618  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.247457  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:46.254920  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.261949  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:46.268910  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272330  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272382  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.306863  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:46.313724  366768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:46.317164  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:46.350547  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:46.384130  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:46.422703  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:46.467027  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:46.517807  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
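[Editor's note] The repeated "openssl x509 -checkend 86400" probes above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; the certificate path below is one example taken from the log:

// Go equivalent of "openssl x509 -noout -in <cert> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h, not after:", cert.NotAfter)
}
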
	I1221 20:27:46.567421  366768 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:46.567522  366768 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:46.567577  366768 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:46.602496  366768 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:46.602528  366768 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:46.602535  366768 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:46.602540  366768 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:46.602544  366768 cri.go:96] found id: ""
	I1221 20:27:46.602592  366768 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:46.614070  366768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:46Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:46.614136  366768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:46.621873  366768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:46.621908  366768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:46.621949  366768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:46.629767  366768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:46.630431  366768 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-734511" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.630721  366768 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-734511" cluster setting kubeconfig missing "newest-cni-734511" context setting]
	I1221 20:27:46.631272  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.632955  366768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:46.640752  366768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1221 20:27:46.640781  366768 kubeadm.go:602] duration metric: took 18.866801ms to restartPrimaryControlPlane
	I1221 20:27:46.640790  366768 kubeadm.go:403] duration metric: took 73.379872ms to StartCluster
	I1221 20:27:46.640811  366768 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.640881  366768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.641874  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.642101  366768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:46.642274  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:46.642329  366768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:46.642383  366768 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-734511"
	I1221 20:27:46.642399  366768 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-734511"
	W1221 20:27:46.642406  366768 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:46.642425  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642739  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.642780  366768 addons.go:70] Setting dashboard=true in profile "newest-cni-734511"
	I1221 20:27:46.642802  366768 addons.go:239] Setting addon dashboard=true in "newest-cni-734511"
	W1221 20:27:46.642810  366768 addons.go:248] addon dashboard should already be in state true
	I1221 20:27:46.642825  366768 addons.go:70] Setting default-storageclass=true in profile "newest-cni-734511"
	I1221 20:27:46.642836  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642853  366768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-734511"
	I1221 20:27:46.643163  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.643341  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.644615  366768 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:46.646107  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:46.668529  366768 addons.go:239] Setting addon default-storageclass=true in "newest-cni-734511"
	W1221 20:27:46.668549  366768 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:46.668571  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.668906  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.669412  366768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:46.669424  366768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:46.670744  366768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1221 20:27:43.636464  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:45.637045  355293 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:27:45.637079  355293 pod_ready.go:86] duration metric: took 31.005880117s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.639371  355293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.643368  355293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.643393  355293 pod_ready.go:86] duration metric: took 3.995822ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.645204  355293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.649549  355293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.649576  355293 pod_ready.go:86] duration metric: took 4.334095ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.651465  355293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.835343  355293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.835366  355293 pod_ready.go:86] duration metric: took 183.883765ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.035541  355293 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.435715  355293 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:27:46.435746  355293 pod_ready.go:86] duration metric: took 400.180233ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.634643  355293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034660  355293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:47.034685  355293 pod_ready.go:86] duration metric: took 400.019644ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034697  355293 pod_ready.go:40] duration metric: took 32.40680352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:47.076294  355293 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:47.077955  355293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
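[Editor's note] The pod_ready.go lines interleaved above poll kube-system pods until each is Ready or gone. A hedged client-go sketch of that wait loop; the kubeconfig path, timeout, and pod name are assumptions lifted from the log, and unlike this sketch minikube waits on a label-selected set of pods rather than a single name:

// Sketch: wait for one kube-system pod to become Ready or disappear.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	const name = "coredns-66bc5c9577-bp67f" // pod name from the log
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Println("pod is gone, treating as done")
			return
		case err != nil:
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
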
	I1221 20:27:46.670728  366768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.670797  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:46.670848  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.671763  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:46.671780  366768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:46.671829  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.700977  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.702794  366768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.702814  366768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:46.702867  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.708071  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.726576  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.783599  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:46.796337  366768 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:46.796401  366768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:46.809276  366768 api_server.go:72] duration metric: took 167.144497ms to wait for apiserver process to appear ...
	I1221 20:27:46.809302  366768 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:46.809324  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:46.817287  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.821194  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:46.821236  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:46.837316  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:46.837342  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:46.838461  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.852066  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:46.852094  366768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:46.867040  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:46.867061  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:46.880590  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:46.880613  366768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:46.893474  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:46.893500  366768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:46.905440  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:46.905462  366768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:46.917382  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:46.917402  366768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:46.929133  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:46.929151  366768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:46.941146  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:48.329199  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1221 20:27:48.329247  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1221 20:27:48.329271  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.340161  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.340244  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.809402  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.813323  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.813346  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.847081  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.029754993s)
	I1221 20:27:48.847159  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00866423s)
	I1221 20:27:48.847289  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.906109829s)
	I1221 20:27:48.850396  366768 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-734511 addons enable metrics-server
	
	I1221 20:27:48.857477  366768 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:48.858708  366768 addons.go:530] duration metric: took 2.216387065s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:49.309469  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.314167  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:49.314201  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:49.809466  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.813534  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:27:49.814524  366768 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:27:49.814550  366768 api_server.go:131] duration metric: took 3.005240792s to wait for apiserver health ...
	I1221 20:27:49.814561  366768 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:49.818217  366768 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:49.818279  366768 system_pods.go:61] "coredns-7d764666f9-jlczz" [8571aecb-77d8-4d07-90b2-fd10aca80bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818296  366768 system_pods.go:61] "etcd-newest-cni-734511" [5f6a8b90-3b7d-433a-8e62-fc0be1f726a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:49.818307  366768 system_pods.go:61] "kindnet-ztvbb" [0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:27:49.818319  366768 system_pods.go:61] "kube-apiserver-newest-cni-734511" [d0ac5067-f06f-4fff-853f-483d61d3a345] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:49.818330  366768 system_pods.go:61] "kube-controller-manager-newest-cni-734511" [fcb485ed-488d-41fb-b94c-dd1321961ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:49.818340  366768 system_pods.go:61] "kube-proxy-9mrbd" [462d4133-ac15-436a-91fe-13e1ec9c1430] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:27:49.818346  366768 system_pods.go:61] "kube-scheduler-newest-cni-734511" [e0670313-ee97-46e9-9090-98628a7613e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:49.818353  366768 system_pods.go:61] "storage-provisioner" [5bfed1a9-5cd0-45a6-abf9-ae34c8f2ab35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818359  366768 system_pods.go:74] duration metric: took 3.791516ms to wait for pod list to return data ...
	I1221 20:27:49.818368  366768 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:49.820504  366768 default_sa.go:45] found service account: "default"
	I1221 20:27:49.820526  366768 default_sa.go:55] duration metric: took 2.152518ms for default service account to be created ...
	I1221 20:27:49.820542  366768 kubeadm.go:587] duration metric: took 3.178410939s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:49.820567  366768 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:49.822831  366768 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:49.822855  366768 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:49.822871  366768 node_conditions.go:105] duration metric: took 2.298304ms to run NodePressure ...
	I1221 20:27:49.822886  366768 start.go:242] waiting for startup goroutines ...
	I1221 20:27:49.822900  366768 start.go:247] waiting for cluster config update ...
	I1221 20:27:49.822919  366768 start.go:256] writing updated cluster config ...
	I1221 20:27:49.823160  366768 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:49.870266  366768 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:49.872014  366768 out.go:179] * Done! kubectl is now configured to use "newest-cni-734511" cluster and "default" namespace by default
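
The healthz loop in the log above simply GETs https://<apiserver>:8443/healthz until it returns 200 "ok": the 403 responses appear while RBAC bootstrap roles are still being created, and the 500 responses list the poststarthooks that have not finished. A rough standalone equivalent is sketched below; the skip-verify TLS config and the hard-coded endpoint (taken from the log) are simplifications for illustration, since minikube authenticates with the cluster's client certificate.

// Poll the apiserver /healthz endpoint until it reports "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log above
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
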
	
	
	==> CRI-O <==
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.941862668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.94580697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.945826534Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.092721491Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b762544-6f03-4e59-8f24-acc663d446d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.093741496Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=46be6827-6cf7-46cb-bcb6-140e924ead83 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.094832979Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=7192d3fc-9628-46a9-9232-7359c252ee23 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.094976772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.101202634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.101665291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.126452251Z" level=info msg="Created container 57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=7192d3fc-9628-46a9-9232-7359c252ee23 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.127036378Z" level=info msg="Starting container: 57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80" id=b14644de-ecd7-455a-a345-7b328b5da13c name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.12869025Z" level=info msg="Started container" PID=1769 containerID=57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper id=b14644de-ecd7-455a-a345-7b328b5da13c name=/runtime.v1.RuntimeService/StartContainer sandboxID=92dd1c0857b1b9886500ab3ffb07fbc9ce8720780d3c0072b3857a5fb5cfcbb6
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.212726428Z" level=info msg="Removing container: 394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb" id=6b8b2014-80d4-4857-8c82-656f4019709f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.222467508Z" level=info msg="Removed container 394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=6b8b2014-80d4-4857-8c82-656f4019709f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.223627168Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=131a03e1-61e4-4c30-98ca-866145b10bde name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.224574085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=60df9305-8842-4700-b122-f1ea56567b23 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.225545267Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e1b52d0b-337b-453b-b181-b2f4dda3f788 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.225703055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230271704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230463253Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7f4d3c8b0f7c071d7d2f731660037640792c327e154d1371e4a10441cf5c4ce6/merged/etc/passwd: no such file or directory"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230498073Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7f4d3c8b0f7c071d7d2f731660037640792c327e154d1371e4a10441cf5c4ce6/merged/etc/group: no such file or directory"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230772215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.258575185Z" level=info msg="Created container 91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0: kube-system/storage-provisioner/storage-provisioner" id=e1b52d0b-337b-453b-b181-b2f4dda3f788 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.25915535Z" level=info msg="Starting container: 91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0" id=a26f4bd5-5418-4e5a-a114-4ca054f3d3e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.261346231Z" level=info msg="Started container" PID=1783 containerID=91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0 description=kube-system/storage-provisioner/storage-provisioner id=a26f4bd5-5418-4e5a-a114-4ca054f3d3e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bde1ad0d91ba03b41039ce833b7a4a4e848b1df6424de4573c5452134238e8d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	91d99edda8967       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   9bde1ad0d91ba       storage-provisioner                                    kube-system
	57a39e576411a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   92dd1c0857b1b       dashboard-metrics-scraper-6ffb444bf9-65f6x             kubernetes-dashboard
	ed1a2848594e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   8a7ac83b87651       kubernetes-dashboard-855c9754f9-n9w5g                  kubernetes-dashboard
	6562f43639a32       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   69dc90c1efad0       coredns-66bc5c9577-bp67f                               kube-system
	8565441180c29       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   898af142ec82b       busybox                                                default
	0c541ab1c15fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   9bde1ad0d91ba       storage-provisioner                                    kube-system
	12105efc4f2b7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           47 seconds ago      Running             kube-proxy                  0                   c7abc9ce1baa2       kube-proxy-w9lgb                                       kube-system
	e6caa72f4d923       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           47 seconds ago      Running             kindnet-cni                 0                   bb97aa2b3bcf3       kindnet-td7vw                                          kube-system
	95eb61e08ac54       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           50 seconds ago      Running             kube-apiserver              0                   cf9d335a91a32       kube-apiserver-default-k8s-diff-port-766361            kube-system
	bc4bf9240c4aa       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           50 seconds ago      Running             etcd                        0                   5c93786db27c4       etcd-default-k8s-diff-port-766361                      kube-system
	bf48b58ae55f3       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           50 seconds ago      Running             kube-scheduler              0                   017ae0ffacfbd       kube-scheduler-default-k8s-diff-port-766361            kube-system
	7c08998468c34       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           50 seconds ago      Running             kube-controller-manager     0                   751ded6a00ac6       kube-controller-manager-default-k8s-diff-port-766361   kube-system
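
The container status table above is read from the CRI runtime (CRI-O), not from the Kubernetes API. A minimal sketch of fetching the same data over the CRI gRPC socket with k8s.io/cri-api is shown below; the socket path /var/run/crio/crio.sock is the usual CRI-O default and is an assumption here.

// List containers straight from the CRI-O socket, roughly what the
// "container status" table above is built from.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-30s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
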
	
	
	==> coredns [6562f43639a320e098d9e4ad843cc037d45453fa65cb6cb1e4248d06d8197488] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58688 - 57030 "HINFO IN 4360920200132600657.7573900220501448968. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091659819s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
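
The i/o timeouts above mean CoreDNS could not yet reach the kubernetes Service ClusterIP (10.96.0.1:443), which is what its kubernetes and ready plugins keep retrying. A trivial connectivity probe of the same shape, with the address taken from the log and intended to run from inside the pod network, looks like this:

// Probe the kubernetes Service ClusterIP with a plain TCP dial and timeout,
// the same failure mode as the "dial tcp ... i/o timeout" errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("kubernetes Service ClusterIP is reachable")
}
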
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-766361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-766361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-766361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_26_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-766361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-766361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                fa186ebe-d952-42e9-84eb-564f086c9a9b
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-bp67f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-766361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-td7vw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-default-k8s-diff-port-766361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-766361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-w9lgb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-default-k8s-diff-port-766361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-65f6x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n9w5g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  114s (x8 over 115s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 115s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 115s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node default-k8s-diff-port-766361 event: Registered Node default-k8s-diff-port-766361 in Controller
	  Normal  NodeReady                91s                  kubelet          Node default-k8s-diff-port-766361 status is now: NodeReady
	  Normal  Starting                 51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                  node-controller  Node default-k8s-diff-port-766361 event: Registered Node default-k8s-diff-port-766361 in Controller
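
The "verifying NodePressure condition" step earlier in the start log (node_conditions.go) and the Conditions/Capacity blocks above come down to reading the node's status through the API. A small client-go sketch of that read follows; the kubeconfig path is an assumption.

// Read node pressure conditions and capacity, similar to what the
// "verifying NodePressure condition" step in the start log checks.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("%s %s=%s\n", n.Name, cond.Type, cond.Status)
			}
		}
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
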
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6] <==
	{"level":"warn","ts":"2025-12-21T20:27:12.290067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.297315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.306838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.313486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.321305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.328344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.335531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.343355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.351085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.378330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.385164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.392511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.399551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.406016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.412989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.419604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.427038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.434842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.442503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.450640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.457760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.471852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.479944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.486082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.534918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:28:01 up  1:10,  0 user,  load average: 3.31, 3.73, 2.78
	Linux default-k8s-diff-port-766361 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6caa72f4d923f220f83a305f8088c750602dbeb5769494d0ffb6489592bbc58] <==
	I1221 20:27:13.621995       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:27:13.714262       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1221 20:27:13.715658       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:27:13.718497       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:27:13.718539       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:27:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:27:13.924167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:27:13.924202       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:27:13.924214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:27:14.013029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:27:14.324509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:27:14.324549       1 metrics.go:72] Registering metrics
	I1221 20:27:14.324640       1 controller.go:711] "Syncing nftables rules"
	I1221 20:27:23.924440       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:23.924492       1 main.go:301] handling current node
	I1221 20:27:33.927346       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:33.927398       1 main.go:301] handling current node
	I1221 20:27:43.924501       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:43.924532       1 main.go:301] handling current node
	I1221 20:27:53.926355       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:53.926393       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7] <==
	I1221 20:27:13.029513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:27:13.029519       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:27:13.029723       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:27:13.029945       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:27:13.030043       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1221 20:27:13.030082       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1221 20:27:13.030443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 20:27:13.030543       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:27:13.030610       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1221 20:27:13.036049       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:27:13.037354       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:27:13.046637       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:27:13.066325       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:27:13.249593       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:27:13.323721       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:27:13.355376       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:27:13.378896       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:27:13.386913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:27:13.442408       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.0"}
	I1221 20:27:13.461478       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.60.23"}
	I1221 20:27:13.940189       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:27:16.831275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:16.881189       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:27:16.881189       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:27:16.933298       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f] <==
	I1221 20:27:16.378520       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1221 20:27:16.378521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:27:16.378551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 20:27:16.378561       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 20:27:16.378680       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 20:27:16.378700       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 20:27:16.378980       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1221 20:27:16.378992       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 20:27:16.379048       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 20:27:16.379200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 20:27:16.379311       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 20:27:16.379388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1221 20:27:16.379413       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-766361"
	I1221 20:27:16.380008       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1221 20:27:16.380677       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:27:16.384178       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:27:16.384352       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1221 20:27:16.395344       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:27:16.395390       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:27:16.395418       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:27:16.395427       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:27:16.395433       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:27:16.396679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:27:16.398994       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1221 20:27:16.408346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [12105efc4f2b781f722122e1b964d9ab68c8321dae8011e99c3d709752394fcb] <==
	I1221 20:27:13.548662       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:27:13.611894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:27:13.712109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:27:13.712162       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1221 20:27:13.712314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:27:13.735687       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:27:13.735752       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:27:13.740828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:27:13.741267       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:27:13.741308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:13.742645       1 config.go:309] "Starting node config controller"
	I1221 20:27:13.742668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:27:13.742678       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:27:13.742714       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:27:13.742719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:27:13.742743       1 config.go:200] "Starting service config controller"
	I1221 20:27:13.742748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:27:13.742767       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:27:13.742776       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:27:13.842850       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:27:13.842867       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:27:13.842867       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0] <==
	I1221 20:27:11.311623       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:27:12.962600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:27:12.962639       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:27:12.962667       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:27:12.962678       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:27:12.992187       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 20:27:12.992292       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:12.995392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:12.995436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:12.995855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:27:12.995925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:27:13.096069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:27:13 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:13.246508     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-xtables-lock\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170198     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/19812d2c-6bef-4834-9d61-fa7abe6c3083-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-65f6x\" (UID: \"19812d2c-6bef-4834-9d61-fa7abe6c3083\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170280     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a18d7a00-dbc1-44ab-936f-eb9fda84c23b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n9w5g\" (UID: \"a18d7a00-dbc1-44ab-936f-eb9fda84c23b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170308     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7sr\" (UniqueName: \"kubernetes.io/projected/a18d7a00-dbc1-44ab-936f-eb9fda84c23b-kube-api-access-zx7sr\") pod \"kubernetes-dashboard-855c9754f9-n9w5g\" (UID: \"a18d7a00-dbc1-44ab-936f-eb9fda84c23b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170334     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs92q\" (UniqueName: \"kubernetes.io/projected/19812d2c-6bef-4834-9d61-fa7abe6c3083-kube-api-access-fs92q\") pod \"dashboard-metrics-scraper-6ffb444bf9-65f6x\" (UID: \"19812d2c-6bef-4834-9d61-fa7abe6c3083\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x"
	Dec 21 20:27:20 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:20.143660     732 scope.go:117] "RemoveContainer" containerID="64050056b60fb7c8b94970591bb2207a20e0740bb12c876f067094dcf21e00f0"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:21.154638     732 scope.go:117] "RemoveContainer" containerID="64050056b60fb7c8b94970591bb2207a20e0740bb12c876f067094dcf21e00f0"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:21.156042     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:21.156279     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:22 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:22.159599     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:22 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:22.160304     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:24 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:24.176976     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g" podStartSLOduration=1.732714769 podStartE2EDuration="8.176951691s" podCreationTimestamp="2025-12-21 20:27:16 +0000 UTC" firstStartedPulling="2025-12-21 20:27:17.331209534 +0000 UTC m=+7.326332476" lastFinishedPulling="2025-12-21 20:27:23.775446456 +0000 UTC m=+13.770569398" observedRunningTime="2025-12-21 20:27:24.176584114 +0000 UTC m=+14.171707073" watchObservedRunningTime="2025-12-21 20:27:24.176951691 +0000 UTC m=+14.172074637"
	Dec 21 20:27:30 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:30.012655     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:30 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:30.012883     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.092081     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.211572     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.211831     732 scope.go:117] "RemoveContainer" containerID="57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:41.212053     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:44 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:44.223213     732 scope.go:117] "RemoveContainer" containerID="0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84"
	Dec 21 20:27:50 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:50.011963     732 scope.go:117] "RemoveContainer" containerID="57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	Dec 21 20:27:50 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:50.012163     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: kubelet.service: Consumed 1.588s CPU time.
	
	
	==> kubernetes-dashboard [ed1a2848594e0790b69aa5bd98a39232a7761c6729fca3b526d211ed609091f6] <==
	2025/12/21 20:27:23 Starting overwatch
	2025/12/21 20:27:23 Using namespace: kubernetes-dashboard
	2025/12/21 20:27:23 Using in-cluster config to connect to apiserver
	2025/12/21 20:27:23 Using secret token for csrf signing
	2025/12/21 20:27:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:27:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:27:23 Successful initial request to the apiserver, version: v1.34.3
	2025/12/21 20:27:23 Generating JWE encryption key
	2025/12/21 20:27:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:27:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:27:23 Initializing JWE encryption key from synchronized object
	2025/12/21 20:27:23 Creating in-cluster Sidecar client
	2025/12/21 20:27:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:27:24 Serving insecurely on HTTP port: 9090
	2025/12/21 20:27:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84] <==
	I1221 20:27:13.500834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:27:43.505660       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0] <==
	I1221 20:27:44.274188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:27:44.281798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:27:44.281833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:27:44.283908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:47.739036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:51.999184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:55.598403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:58.651368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.673538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.678087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:28:01.678216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:28:01.678401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766361_b10bcb71-4483-4f94-9f25-5c591e44dec1!
	I1221 20:28:01.678340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"577e970a-eb7c-428e-948b-c188b50d25b7", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-766361_b10bcb71-4483-4f94-9f25-5c591e44dec1 became leader
	W1221 20:28:01.680219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.683674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361: exit status 2 (318.414922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-766361
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-766361:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	        "Created": "2025-12-21T20:25:56.399803234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-21T20:27:01.969153121Z",
	            "FinishedAt": "2025-12-21T20:27:00.892356964Z"
	        },
	        "Image": "sha256:172e872745980801c94284f4f07e825c00d6159d09e87254d8b524494a7b9a17",
	        "ResolvConfPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/hosts",
	        "LogPath": "/var/lib/docker/containers/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e/7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e-json.log",
	        "Name": "/default-k8s-diff-port-766361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-766361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-766361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b1bfe9daca1a747d7a49c725354df1a5864b97203481a60c5901a74f7debb3e",
	                "LowerDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c-init/diff:/var/lib/docker/overlay2/39277325850ad141cf78d64dfc224aa4df3f2a10ca96b4ef4f8688ab6604e765/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47b51f01261b2acd9c998fde2abe8d584d4f79ad9a71da8c8150a371babbc68c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-766361",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-766361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-766361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-766361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e66b90cfaaf3ac3a40d72e25945bda055210415f1a80fb75f18fce3fd25735df",
	            "SandboxKey": "/var/run/docker/netns/e66b90cfaaf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-766361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da966e5bad965057a3f23332d40d7f74bcb84482d07b5154dbfb77c723cfe0cd",
	                    "EndpointID": "3de00a681e765d24454a0e9032ade118293671adcb3e15e624b362726a3af34d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:e3:9d:cf:e8:d0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-766361",
	                        "7b1bfe9daca1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361: exit status 2 (323.253245ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-766361 logs -n 25: (1.02182652s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬────
─────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼────
─────────────────┤
	│ delete  │ -p old-k8s-version-699289                                                                                                                                                                                                                          │ old-k8s-version-699289            │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ no-preload-328404 image list --format=json                                                                                                                                                                                                         │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p no-preload-328404 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ embed-certs-413073 image list --format=json                                                                                                                                                                                                        │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p embed-certs-413073 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-734511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p no-preload-328404                                                                                                                                                                                                                               │ no-preload-328404                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                             │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ stop    │ -p newest-cni-734511 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p test-preload-dl-gcs-162834                                                                                                                                                                                                                      │ test-preload-dl-gcs-162834        │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                       │ test-preload-dl-github-984988     │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p embed-certs-413073                                                                                                                                                                                                                              │ embed-certs-413073                │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ addons  │ enable dashboard -p newest-cni-734511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-832404 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                      │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-832404                                                                                                                                                                                                               │ test-preload-dl-gcs-cached-832404 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ newest-cni-734511 image list --format=json                                                                                                                                                                                                         │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p newest-cni-734511 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	│ delete  │ -p newest-cni-734511                                                                                                                                                                                                                               │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ delete  │ -p newest-cni-734511                                                                                                                                                                                                                               │ newest-cni-734511                 │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ image   │ default-k8s-diff-port-766361 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │ 21 Dec 25 20:27 UTC │
	│ pause   │ -p default-k8s-diff-port-766361 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-766361      │ jenkins │ v1.37.0 │ 21 Dec 25 20:27 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴────
─────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 20:27:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 20:27:39.861418  366911 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:27:39.861689  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861699  366911 out.go:374] Setting ErrFile to fd 2...
	I1221 20:27:39.861716  366911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:27:39.861952  366911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:27:39.862461  366911 out.go:368] Setting JSON to false
	I1221 20:27:39.863571  366911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4209,"bootTime":1766344651,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:27:39.863625  366911 start.go:143] virtualization: kvm guest
	I1221 20:27:39.865281  366911 out.go:179] * [test-preload-dl-gcs-cached-832404] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:27:39.866365  366911 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:27:39.866394  366911 notify.go:221] Checking for updates...
	I1221 20:27:39.868343  366911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:27:39.869547  366911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:39.870766  366911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:27:39.871777  366911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:27:39.872792  366911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:27:39.813750  366768 start.go:309] selected driver: docker
	I1221 20:27:39.813763  366768 start.go:928] validating driver "docker" against &{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.813865  366768 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:27:39.814431  366768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.876119  366768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-21 20:27:39.8661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.876540  366768 start_flags.go:1014] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:39.876591  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:39.876661  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:39.876724  366768 start.go:353] cluster config:
	{Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:39.878915  366768 out.go:179] * Starting "newest-cni-734511" primary control-plane node in "newest-cni-734511" cluster
	I1221 20:27:39.879838  366768 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:39.880931  366768 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:39.881866  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:39.881912  366768 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:39.881925  366768 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:39.881974  366768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:39.882031  366768 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:39.882046  366768 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1221 20:27:39.882176  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:39.903361  366768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:39.903382  366768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
	I1221 20:27:39.903398  366768 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:39.903430  366768 start.go:360] acquireMachinesLock for newest-cni-734511: {Name:mk73e51f1f54bba023ba70ceb2589863fd06b9dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 20:27:39.903492  366768 start.go:364] duration metric: took 34.632µs to acquireMachinesLock for "newest-cni-734511"
	I1221 20:27:39.903512  366768 start.go:96] Skipping create...Using existing machine configuration
	I1221 20:27:39.903523  366768 fix.go:54] fixHost starting: 
	I1221 20:27:39.903753  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:39.923053  366768 fix.go:112] recreateIfNeeded on newest-cni-734511: state=Stopped err=<nil>
	W1221 20:27:39.923121  366768 fix.go:138] unexpected machine state, will restart: <nil>
	I1221 20:27:39.874491  366911 config.go:182] Loaded profile config "default-k8s-diff-port-766361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:27:39.874647  366911 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:39.874760  366911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:27:39.901645  366911 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:27:39.901739  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:39.958327  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-21 20:27:39.948377601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:39.958440  366911 docker.go:319] overlay module found
	I1221 20:27:39.959925  366911 out.go:179] * Using the docker driver based on user configuration
	I1221 20:27:39.961104  366911 start.go:309] selected driver: docker
	I1221 20:27:39.961123  366911 start.go:928] validating driver "docker" against <nil>
	I1221 20:27:39.961304  366911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:27:40.019442  366911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:27:40.008652501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:27:40.019675  366911 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 20:27:40.020403  366911 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 20:27:40.020608  366911 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 20:27:40.023852  366911 out.go:179] * Using Docker driver with root privileges
	I1221 20:27:40.025067  366911 cni.go:84] Creating CNI manager for ""
	I1221 20:27:40.025144  366911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:40.025159  366911 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 20:27:40.025380  366911 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-832404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-832404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1221 20:27:40.026811  366911 out.go:179] * Starting "test-preload-dl-gcs-cached-832404" primary control-plane node in "test-preload-dl-gcs-cached-832404" cluster
	I1221 20:27:40.028161  366911 cache.go:134] Beginning downloading kic base image for docker with crio
	I1221 20:27:40.030043  366911 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
	I1221 20:27:40.031142  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031190  366911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 20:27:40.031200  366911 cache.go:65] Caching tarball of preloaded images
	I1221 20:27:40.031280  366911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
	I1221 20:27:40.031312  366911 preload.go:251] Found /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 20:27:40.031323  366911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I1221 20:27:40.031455  366911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json ...
	I1221 20:27:40.031477  366911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/test-preload-dl-gcs-cached-832404/config.json: {Name:mkf6696e0851cdf6856c1ee2548d89a9b19f171c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:40.031631  366911 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I1221 20:27:40.031707  366911 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I1221 20:27:40.056706  366911 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
	I1221 20:27:40.056732  366911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 to local cache
	I1221 20:27:40.056815  366911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory
	I1221 20:27:40.056829  366911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local cache directory, skipping pull
	I1221 20:27:40.056833  366911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in cache, skipping pull
	I1221 20:27:40.056842  366911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 as a tarball
	I1221 20:27:40.056853  366911 cache.go:243] Successfully downloaded all kic artifacts
	I1221 20:27:40.058381  366911 out.go:179] * Download complete!
	W1221 20:27:39.136122  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	W1221 20:27:41.635776  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:39.924855  366768 out.go:252] * Restarting existing docker container for "newest-cni-734511" ...
	I1221 20:27:39.924929  366768 cli_runner.go:164] Run: docker start newest-cni-734511
	I1221 20:27:40.181723  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:40.200215  366768 kic.go:430] container "newest-cni-734511" state is running.
	I1221 20:27:40.200630  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:40.221078  366768 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/config.json ...
	I1221 20:27:40.221314  366768 machine.go:94] provisionDockerMachine start ...
	I1221 20:27:40.221390  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:40.240477  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:40.240777  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:40.240791  366768 main.go:144] libmachine: About to run SSH command:
	hostname
	I1221 20:27:40.241508  366768 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52326->127.0.0.1:33139: read: connection reset by peer
	I1221 20:27:43.377002  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.377031  366768 ubuntu.go:182] provisioning hostname "newest-cni-734511"
	I1221 20:27:43.377090  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.394956  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.395200  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.395215  366768 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-734511 && echo "newest-cni-734511" | sudo tee /etc/hostname
	I1221 20:27:43.540257  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-734511
	
	I1221 20:27:43.540338  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.558595  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:43.558789  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:43.558805  366768 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-734511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-734511/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-734511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 20:27:43.693472  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1221 20:27:43.693519  366768 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-9159/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-9159/.minikube}
	I1221 20:27:43.693547  366768 ubuntu.go:190] setting up certificates
	I1221 20:27:43.693561  366768 provision.go:84] configureAuth start
	I1221 20:27:43.693606  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:43.711122  366768 provision.go:143] copyHostCerts
	I1221 20:27:43.711190  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem, removing ...
	I1221 20:27:43.711206  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem
	I1221 20:27:43.711307  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/ca.pem (1078 bytes)
	I1221 20:27:43.711418  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem, removing ...
	I1221 20:27:43.711428  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem
	I1221 20:27:43.711455  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/cert.pem (1123 bytes)
	I1221 20:27:43.711526  366768 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem, removing ...
	I1221 20:27:43.711534  366768 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem
	I1221 20:27:43.711556  366768 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-9159/.minikube/key.pem (1675 bytes)
	I1221 20:27:43.711608  366768 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem org=jenkins.newest-cni-734511 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-734511]
	I1221 20:27:43.863689  366768 provision.go:177] copyRemoteCerts
	I1221 20:27:43.863758  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 20:27:43.863795  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:43.880942  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:43.976993  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 20:27:43.994083  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1221 20:27:44.010099  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 20:27:44.026129  366768 provision.go:87] duration metric: took 332.557611ms to configureAuth
	I1221 20:27:44.026157  366768 ubuntu.go:206] setting minikube options for container-runtime
	I1221 20:27:44.026344  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:44.026447  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.044140  366768 main.go:144] libmachine: Using SSH client type: native
	I1221 20:27:44.044410  366768 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1221 20:27:44.044442  366768 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 20:27:44.337510  366768 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 20:27:44.337537  366768 machine.go:97] duration metric: took 4.116205242s to provisionDockerMachine
	I1221 20:27:44.337550  366768 start.go:293] postStartSetup for "newest-cni-734511" (driver="docker")
	I1221 20:27:44.337565  366768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 20:27:44.337645  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 20:27:44.337696  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.356430  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.456570  366768 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 20:27:44.460019  366768 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 20:27:44.460045  366768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1221 20:27:44.460055  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/addons for local assets ...
	I1221 20:27:44.460115  366768 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-9159/.minikube/files for local assets ...
	I1221 20:27:44.460217  366768 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem -> 127112.pem in /etc/ssl/certs
	I1221 20:27:44.460366  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 20:27:44.467484  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:44.484578  366768 start.go:296] duration metric: took 147.011218ms for postStartSetup
	I1221 20:27:44.484652  366768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:27:44.484701  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.502940  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.597000  366768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 20:27:44.601372  366768 fix.go:56] duration metric: took 4.697843581s for fixHost
	I1221 20:27:44.601398  366768 start.go:83] releasing machines lock for "newest-cni-734511", held for 4.697894238s
	I1221 20:27:44.601460  366768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-734511
	I1221 20:27:44.619235  366768 ssh_runner.go:195] Run: cat /version.json
	I1221 20:27:44.619305  366768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 20:27:44.619325  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.619372  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:44.640849  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.641206  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:44.788588  366768 ssh_runner.go:195] Run: systemctl --version
	I1221 20:27:44.794953  366768 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 20:27:44.828982  366768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1221 20:27:44.833576  366768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1221 20:27:44.833632  366768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 20:27:44.841303  366768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1221 20:27:44.841323  366768 start.go:496] detecting cgroup driver to use...
	I1221 20:27:44.841355  366768 detect.go:190] detected "systemd" cgroup driver on host os
	I1221 20:27:44.841399  366768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 20:27:44.854483  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 20:27:44.866035  366768 docker.go:218] disabling cri-docker service (if available) ...
	I1221 20:27:44.866075  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 20:27:44.879803  366768 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 20:27:44.891096  366768 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 20:27:44.962811  366768 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 20:27:45.036580  366768 docker.go:234] disabling docker service ...
	I1221 20:27:45.036655  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 20:27:45.049959  366768 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 20:27:45.061658  366768 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 20:27:45.143449  366768 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 20:27:45.222903  366768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 20:27:45.237087  366768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 20:27:45.250978  366768 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1221 20:27:45.251037  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.259700  366768 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1221 20:27:45.259758  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.268003  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.276177  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.284319  366768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 20:27:45.291742  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.299910  366768 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.307415  366768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 20:27:45.315340  366768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 20:27:45.322121  366768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 20:27:45.328957  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.401093  366768 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 20:27:45.538335  366768 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 20:27:45.538418  366768 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 20:27:45.542214  366768 start.go:564] Will wait 60s for crictl version
	I1221 20:27:45.542281  366768 ssh_runner.go:195] Run: which crictl
	I1221 20:27:45.545577  366768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1221 20:27:45.568875  366768 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1221 20:27:45.568942  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.595166  366768 ssh_runner.go:195] Run: crio --version
	I1221 20:27:45.623728  366768 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1221 20:27:45.624987  366768 cli_runner.go:164] Run: docker network inspect newest-cni-734511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 20:27:45.644329  366768 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1221 20:27:45.649761  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.662664  366768 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1221 20:27:45.663704  366768 kubeadm.go:884] updating cluster {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1221 20:27:45.663826  366768 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1221 20:27:45.663883  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.694292  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.694315  366768 crio.go:433] Images already preloaded, skipping extraction
	I1221 20:27:45.694369  366768 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 20:27:45.718991  366768 crio.go:514] all images are preloaded for cri-o runtime.
	I1221 20:27:45.719012  366768 cache_images.go:86] Images are preloaded, skipping loading
	I1221 20:27:45.719021  366768 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1221 20:27:45.719114  366768 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-734511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1221 20:27:45.719176  366768 ssh_runner.go:195] Run: crio config
	I1221 20:27:45.762367  366768 cni.go:84] Creating CNI manager for ""
	I1221 20:27:45.762384  366768 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 20:27:45.762397  366768 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1221 20:27:45.762418  366768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-734511 NodeName:newest-cni-734511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 20:27:45.762543  366768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-734511"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1221 20:27:45.762599  366768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1221 20:27:45.770445  366768 binaries.go:51] Found k8s binaries, skipping transfer
	I1221 20:27:45.770499  366768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 20:27:45.778476  366768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1221 20:27:45.790329  366768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1221 20:27:45.801764  366768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1221 20:27:45.813017  366768 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1221 20:27:45.816383  366768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 20:27:45.825744  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:45.897847  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:45.922243  366768 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511 for IP: 192.168.76.2
	I1221 20:27:45.922261  366768 certs.go:195] generating shared ca certs ...
	I1221 20:27:45.922276  366768 certs.go:227] acquiring lock for ca certs: {Name:mkd575e77f99c735595db1aac2f2d1fd448362be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:45.922431  366768 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key
	I1221 20:27:45.922536  366768 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key
	I1221 20:27:45.922554  366768 certs.go:257] generating profile certs ...
	I1221 20:27:45.922657  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/client.key
	I1221 20:27:45.922734  366768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key.cbe81303
	I1221 20:27:45.922785  366768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key
	I1221 20:27:45.922933  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem (1338 bytes)
	W1221 20:27:45.922989  366768 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711_empty.pem, impossibly tiny 0 bytes
	I1221 20:27:45.923004  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca-key.pem (1675 bytes)
	I1221 20:27:45.923043  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/ca.pem (1078 bytes)
	I1221 20:27:45.923080  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/cert.pem (1123 bytes)
	I1221 20:27:45.923115  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/certs/key.pem (1675 bytes)
	I1221 20:27:45.923174  366768 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem (1708 bytes)
	I1221 20:27:45.923964  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 20:27:45.941766  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 20:27:45.959821  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 20:27:45.977641  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1221 20:27:45.999591  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1221 20:27:46.017180  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 20:27:46.033291  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 20:27:46.049616  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/newest-cni-734511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 20:27:46.065936  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 20:27:46.082176  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/certs/12711.pem --> /usr/share/ca-certificates/12711.pem (1338 bytes)
	I1221 20:27:46.100908  366768 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/ssl/certs/127112.pem --> /usr/share/ca-certificates/127112.pem (1708 bytes)
	I1221 20:27:46.118404  366768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1221 20:27:46.130148  366768 ssh_runner.go:195] Run: openssl version
	I1221 20:27:46.135988  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.143205  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1221 20:27:46.150252  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153722  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 21 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.153769  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 20:27:46.187692  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1221 20:27:46.195104  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.201979  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12711.pem /etc/ssl/certs/12711.pem
	I1221 20:27:46.209200  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212567  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 21 19:54 /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.212618  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12711.pem
	I1221 20:27:46.247457  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1221 20:27:46.254920  366768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.261949  366768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/127112.pem /etc/ssl/certs/127112.pem
	I1221 20:27:46.268910  366768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272330  366768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 21 19:54 /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.272382  366768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/127112.pem
	I1221 20:27:46.306863  366768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1221 20:27:46.313724  366768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1221 20:27:46.317164  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1221 20:27:46.350547  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1221 20:27:46.384130  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1221 20:27:46.422703  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1221 20:27:46.467027  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1221 20:27:46.517807  366768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1221 20:27:46.567421  366768 kubeadm.go:401] StartCluster: {Name:newest-cni-734511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-734511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 20:27:46.567522  366768 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 20:27:46.567577  366768 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 20:27:46.602496  366768 cri.go:96] found id: "63cadcc519eb22974af4ce38b549824bc7af808adeea58b242a4b0873a6751bb"
	I1221 20:27:46.602528  366768 cri.go:96] found id: "e33943bb495ce1912e4005ea3567d77593276166c1fd6f4b6aa7b8dfa099bd28"
	I1221 20:27:46.602535  366768 cri.go:96] found id: "677bf72e8ae93eeb068898d553b913b3fd50c91ff93f621623bc3cdb5005a412"
	I1221 20:27:46.602540  366768 cri.go:96] found id: "a5c272c972236f61d6f84db735dfb3c0b9854863ece820a63d052399e20e26d3"
	I1221 20:27:46.602544  366768 cri.go:96] found id: ""
	I1221 20:27:46.602592  366768 ssh_runner.go:195] Run: sudo runc list -f json
	W1221 20:27:46.614070  366768 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T20:27:46Z" level=error msg="open /run/runc: no such file or directory"
	I1221 20:27:46.614136  366768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 20:27:46.621873  366768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1221 20:27:46.621908  366768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1221 20:27:46.621949  366768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1221 20:27:46.629767  366768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:27:46.630431  366768 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-734511" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.630721  366768 kubeconfig.go:62] /home/jenkins/minikube-integration/22179-9159/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-734511" cluster setting kubeconfig missing "newest-cni-734511" context setting]
	I1221 20:27:46.631272  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.632955  366768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1221 20:27:46.640752  366768 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1221 20:27:46.640781  366768 kubeadm.go:602] duration metric: took 18.866801ms to restartPrimaryControlPlane
	I1221 20:27:46.640790  366768 kubeadm.go:403] duration metric: took 73.379872ms to StartCluster
	I1221 20:27:46.640811  366768 settings.go:142] acquiring lock: {Name:mk249f074042de551a13e8c83713d6ef98f54b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.640881  366768 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:27:46.641874  366768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-9159/kubeconfig: {Name:mk65a31a9c89842c59018c8e283bdb481b82a9e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 20:27:46.642101  366768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 20:27:46.642274  366768 config.go:182] Loaded profile config "newest-cni-734511": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:27:46.642329  366768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1221 20:27:46.642383  366768 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-734511"
	I1221 20:27:46.642399  366768 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-734511"
	W1221 20:27:46.642406  366768 addons.go:248] addon storage-provisioner should already be in state true
	I1221 20:27:46.642425  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642739  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.642780  366768 addons.go:70] Setting dashboard=true in profile "newest-cni-734511"
	I1221 20:27:46.642802  366768 addons.go:239] Setting addon dashboard=true in "newest-cni-734511"
	W1221 20:27:46.642810  366768 addons.go:248] addon dashboard should already be in state true
	I1221 20:27:46.642825  366768 addons.go:70] Setting default-storageclass=true in profile "newest-cni-734511"
	I1221 20:27:46.642836  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.642853  366768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-734511"
	I1221 20:27:46.643163  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.643341  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.644615  366768 out.go:179] * Verifying Kubernetes components...
	I1221 20:27:46.646107  366768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 20:27:46.668529  366768 addons.go:239] Setting addon default-storageclass=true in "newest-cni-734511"
	W1221 20:27:46.668549  366768 addons.go:248] addon default-storageclass should already be in state true
	I1221 20:27:46.668571  366768 host.go:66] Checking if "newest-cni-734511" exists ...
	I1221 20:27:46.668906  366768 cli_runner.go:164] Run: docker container inspect newest-cni-734511 --format={{.State.Status}}
	I1221 20:27:46.669412  366768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1221 20:27:46.669424  366768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 20:27:46.670744  366768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1221 20:27:43.636464  355293 pod_ready.go:104] pod "coredns-66bc5c9577-bp67f" is not "Ready", error: <nil>
	I1221 20:27:45.637045  355293 pod_ready.go:94] pod "coredns-66bc5c9577-bp67f" is "Ready"
	I1221 20:27:45.637079  355293 pod_ready.go:86] duration metric: took 31.005880117s for pod "coredns-66bc5c9577-bp67f" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.639371  355293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.643368  355293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.643393  355293 pod_ready.go:86] duration metric: took 3.995822ms for pod "etcd-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.645204  355293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.649549  355293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.649576  355293 pod_ready.go:86] duration metric: took 4.334095ms for pod "kube-apiserver-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.651465  355293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:45.835343  355293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:45.835366  355293 pod_ready.go:86] duration metric: took 183.883765ms for pod "kube-controller-manager-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.035541  355293 pod_ready.go:83] waiting for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.435715  355293 pod_ready.go:94] pod "kube-proxy-w9lgb" is "Ready"
	I1221 20:27:46.435746  355293 pod_ready.go:86] duration metric: took 400.180233ms for pod "kube-proxy-w9lgb" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:46.634643  355293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034660  355293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-766361" is "Ready"
	I1221 20:27:47.034685  355293 pod_ready.go:86] duration metric: took 400.019644ms for pod "kube-scheduler-default-k8s-diff-port-766361" in "kube-system" namespace to be "Ready" or be gone ...
	I1221 20:27:47.034697  355293 pod_ready.go:40] duration metric: took 32.40680352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1221 20:27:47.076294  355293 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1221 20:27:47.077955  355293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-766361" cluster and "default" namespace by default
	I1221 20:27:46.670728  366768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.670797  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 20:27:46.670848  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.671763  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1221 20:27:46.671780  366768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1221 20:27:46.671829  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.700977  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.702794  366768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.702814  366768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 20:27:46.702867  366768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-734511
	I1221 20:27:46.708071  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.726576  366768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/newest-cni-734511/id_rsa Username:docker}
	I1221 20:27:46.783599  366768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1221 20:27:46.796337  366768 api_server.go:52] waiting for apiserver process to appear ...
	I1221 20:27:46.796401  366768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:27:46.809276  366768 api_server.go:72] duration metric: took 167.144497ms to wait for apiserver process to appear ...
	I1221 20:27:46.809302  366768 api_server.go:88] waiting for apiserver healthz status ...
	I1221 20:27:46.809324  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:46.817287  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 20:27:46.821194  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1221 20:27:46.821236  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1221 20:27:46.837316  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1221 20:27:46.837342  366768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1221 20:27:46.838461  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 20:27:46.852066  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1221 20:27:46.852094  366768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1221 20:27:46.867040  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1221 20:27:46.867061  366768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1221 20:27:46.880590  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1221 20:27:46.880613  366768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1221 20:27:46.893474  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1221 20:27:46.893500  366768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1221 20:27:46.905440  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1221 20:27:46.905462  366768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1221 20:27:46.917382  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1221 20:27:46.917402  366768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1221 20:27:46.929133  366768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:46.929151  366768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1221 20:27:46.941146  366768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1221 20:27:48.329199  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1221 20:27:48.329247  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1221 20:27:48.329271  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.340161  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.340244  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.809402  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:48.813323  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:48.813346  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:48.847081  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.029754993s)
	I1221 20:27:48.847159  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00866423s)
	I1221 20:27:48.847289  366768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.906109829s)
	I1221 20:27:48.850396  366768 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-734511 addons enable metrics-server
	
	I1221 20:27:48.857477  366768 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1221 20:27:48.858708  366768 addons.go:530] duration metric: took 2.216387065s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1221 20:27:49.309469  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.314167  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1221 20:27:49.314201  366768 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1221 20:27:49.809466  366768 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1221 20:27:49.813534  366768 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1221 20:27:49.814524  366768 api_server.go:141] control plane version: v1.35.0-rc.1
	I1221 20:27:49.814550  366768 api_server.go:131] duration metric: took 3.005240792s to wait for apiserver health ...
	I1221 20:27:49.814561  366768 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 20:27:49.818217  366768 system_pods.go:59] 8 kube-system pods found
	I1221 20:27:49.818279  366768 system_pods.go:61] "coredns-7d764666f9-jlczz" [8571aecb-77d8-4d07-90b2-fd10aca80bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818296  366768 system_pods.go:61] "etcd-newest-cni-734511" [5f6a8b90-3b7d-433a-8e62-fc0be1f726a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1221 20:27:49.818307  366768 system_pods.go:61] "kindnet-ztvbb" [0bd0fcd8-ea44-43e6-84d4-0a7bc95a3e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1221 20:27:49.818319  366768 system_pods.go:61] "kube-apiserver-newest-cni-734511" [d0ac5067-f06f-4fff-853f-483d61d3a345] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1221 20:27:49.818330  366768 system_pods.go:61] "kube-controller-manager-newest-cni-734511" [fcb485ed-488d-41fb-b94c-dd1321961ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1221 20:27:49.818340  366768 system_pods.go:61] "kube-proxy-9mrbd" [462d4133-ac15-436a-91fe-13e1ec9c1430] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1221 20:27:49.818346  366768 system_pods.go:61] "kube-scheduler-newest-cni-734511" [e0670313-ee97-46e9-9090-98628a7613e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1221 20:27:49.818353  366768 system_pods.go:61] "storage-provisioner" [5bfed1a9-5cd0-45a6-abf9-ae34c8f2ab35] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1221 20:27:49.818359  366768 system_pods.go:74] duration metric: took 3.791516ms to wait for pod list to return data ...
	I1221 20:27:49.818368  366768 default_sa.go:34] waiting for default service account to be created ...
	I1221 20:27:49.820504  366768 default_sa.go:45] found service account: "default"
	I1221 20:27:49.820526  366768 default_sa.go:55] duration metric: took 2.152518ms for default service account to be created ...
	I1221 20:27:49.820542  366768 kubeadm.go:587] duration metric: took 3.178410939s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1221 20:27:49.820567  366768 node_conditions.go:102] verifying NodePressure condition ...
	I1221 20:27:49.822831  366768 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 20:27:49.822855  366768 node_conditions.go:123] node cpu capacity is 8
	I1221 20:27:49.822871  366768 node_conditions.go:105] duration metric: took 2.298304ms to run NodePressure ...
	I1221 20:27:49.822886  366768 start.go:242] waiting for startup goroutines ...
	I1221 20:27:49.822900  366768 start.go:247] waiting for cluster config update ...
	I1221 20:27:49.822919  366768 start.go:256] writing updated cluster config ...
	I1221 20:27:49.823160  366768 ssh_runner.go:195] Run: rm -f paused
	I1221 20:27:49.870266  366768 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1221 20:27:49.872014  366768 out.go:179] * Done! kubectl is now configured to use "newest-cni-734511" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.941862668Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.94580697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 21 20:27:23 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:23.945826534Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.092721491Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b762544-6f03-4e59-8f24-acc663d446d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.093741496Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=46be6827-6cf7-46cb-bcb6-140e924ead83 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.094832979Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=7192d3fc-9628-46a9-9232-7359c252ee23 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.094976772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.101202634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.101665291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.126452251Z" level=info msg="Created container 57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=7192d3fc-9628-46a9-9232-7359c252ee23 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.127036378Z" level=info msg="Starting container: 57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80" id=b14644de-ecd7-455a-a345-7b328b5da13c name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.12869025Z" level=info msg="Started container" PID=1769 containerID=57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper id=b14644de-ecd7-455a-a345-7b328b5da13c name=/runtime.v1.RuntimeService/StartContainer sandboxID=92dd1c0857b1b9886500ab3ffb07fbc9ce8720780d3c0072b3857a5fb5cfcbb6
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.212726428Z" level=info msg="Removing container: 394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb" id=6b8b2014-80d4-4857-8c82-656f4019709f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:41 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:41.222467508Z" level=info msg="Removed container 394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x/dashboard-metrics-scraper" id=6b8b2014-80d4-4857-8c82-656f4019709f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.223627168Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=131a03e1-61e4-4c30-98ca-866145b10bde name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.224574085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=60df9305-8842-4700-b122-f1ea56567b23 name=/runtime.v1.ImageService/ImageStatus
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.225545267Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e1b52d0b-337b-453b-b181-b2f4dda3f788 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.225703055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230271704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230463253Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7f4d3c8b0f7c071d7d2f731660037640792c327e154d1371e4a10441cf5c4ce6/merged/etc/passwd: no such file or directory"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230498073Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7f4d3c8b0f7c071d7d2f731660037640792c327e154d1371e4a10441cf5c4ce6/merged/etc/group: no such file or directory"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.230772215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.258575185Z" level=info msg="Created container 91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0: kube-system/storage-provisioner/storage-provisioner" id=e1b52d0b-337b-453b-b181-b2f4dda3f788 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.25915535Z" level=info msg="Starting container: 91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0" id=a26f4bd5-5418-4e5a-a114-4ca054f3d3e3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 20:27:44 default-k8s-diff-port-766361 crio[566]: time="2025-12-21T20:27:44.261346231Z" level=info msg="Started container" PID=1783 containerID=91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0 description=kube-system/storage-provisioner/storage-provisioner id=a26f4bd5-5418-4e5a-a114-4ca054f3d3e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bde1ad0d91ba03b41039ce833b7a4a4e848b1df6424de4573c5452134238e8d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	91d99edda8967       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   9bde1ad0d91ba       storage-provisioner                                    kube-system
	57a39e576411a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   92dd1c0857b1b       dashboard-metrics-scraper-6ffb444bf9-65f6x             kubernetes-dashboard
	ed1a2848594e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   8a7ac83b87651       kubernetes-dashboard-855c9754f9-n9w5g                  kubernetes-dashboard
	6562f43639a32       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   69dc90c1efad0       coredns-66bc5c9577-bp67f                               kube-system
	8565441180c29       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   898af142ec82b       busybox                                                default
	0c541ab1c15fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   9bde1ad0d91ba       storage-provisioner                                    kube-system
	12105efc4f2b7       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           49 seconds ago      Running             kube-proxy                  0                   c7abc9ce1baa2       kube-proxy-w9lgb                                       kube-system
	e6caa72f4d923       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           49 seconds ago      Running             kindnet-cni                 0                   bb97aa2b3bcf3       kindnet-td7vw                                          kube-system
	95eb61e08ac54       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           52 seconds ago      Running             kube-apiserver              0                   cf9d335a91a32       kube-apiserver-default-k8s-diff-port-766361            kube-system
	bc4bf9240c4aa       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   5c93786db27c4       etcd-default-k8s-diff-port-766361                      kube-system
	bf48b58ae55f3       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           52 seconds ago      Running             kube-scheduler              0                   017ae0ffacfbd       kube-scheduler-default-k8s-diff-port-766361            kube-system
	7c08998468c34       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           52 seconds ago      Running             kube-controller-manager     0                   751ded6a00ac6       kube-controller-manager-default-k8s-diff-port-766361   kube-system
	
	
	==> coredns [6562f43639a320e098d9e4ad843cc037d45453fa65cb6cb1e4248d06d8197488] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58688 - 57030 "HINFO IN 4360920200132600657.7573900220501448968. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091659819s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-766361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-766361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=606da7122583b5a79b82859b38097457cda6198c
	                    minikube.k8s.io/name=default-k8s-diff-port-766361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_21T20_26_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Dec 2025 20:26:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-766361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Dec 2025 20:27:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Dec 2025 20:27:43 +0000   Sun, 21 Dec 2025 20:26:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-766361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 e90d1346af8fcc716e41ac1169465ff8
	  System UUID:                fa186ebe-d952-42e9-84eb-564f086c9a9b
	  Boot ID:                    be97452c-103c-43c9-bea2-1ebf44ce6f18
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-bp67f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-766361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-td7vw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-766361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-766361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-w9lgb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-766361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-65f6x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n9w5g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 117s)  kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node default-k8s-diff-port-766361 event: Registered Node default-k8s-diff-port-766361 in Controller
	  Normal  NodeReady                93s                  kubelet          Node default-k8s-diff-port-766361 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node default-k8s-diff-port-766361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node default-k8s-diff-port-766361 event: Registered Node default-k8s-diff-port-766361 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 1e 35 9a 71 31 1e 8e cc 49 2a 3f b6 08 00
	[Dec21 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[Dec21 20:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 62 23 df b6 20 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 00 8b 1e 5d c7 08 06
	[ +13.247705] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[  +4.421077] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 1a 9c 5f 6e cf 60 08 06
	[  +0.000326] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 94 e1 20 43 8d 08 06
	[  +4.397778] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	[  +0.001780] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 cf 5f d8 ca 92 08 06
	[ +11.855140] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 4c 4a f4 d0 1c 08 06
	[  +0.000547] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 37 6d e4 13 eb 08 06
	[Dec21 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 00 0c bc ae 65 08 06
	[  +0.000312] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 19 fb cc 2c 43 08 06
	
	
	==> etcd [bc4bf9240c4aa100801fb683a3f157efc0f5b88c89dfdf68c17051a9beedf9e6] <==
	{"level":"warn","ts":"2025-12-21T20:27:12.290067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.297315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.306838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.313486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.321305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.328344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.335531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.343355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.351085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.378330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.385164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.392511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.399551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.406016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.412989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.419604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.427038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.434842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.442503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.450640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.457760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.471852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.479944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.486082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-21T20:27:12.534918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:28:03 up  1:10,  0 user,  load average: 3.04, 3.67, 2.77
	Linux default-k8s-diff-port-766361 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6caa72f4d923f220f83a305f8088c750602dbeb5769494d0ffb6489592bbc58] <==
	I1221 20:27:13.621995       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1221 20:27:13.714262       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1221 20:27:13.715658       1 main.go:148] setting mtu 1500 for CNI 
	I1221 20:27:13.718497       1 main.go:178] kindnetd IP family: "ipv4"
	I1221 20:27:13.718539       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-21T20:27:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1221 20:27:13.924167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1221 20:27:13.924202       1 controller.go:381] "Waiting for informer caches to sync"
	I1221 20:27:13.924214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1221 20:27:14.013029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1221 20:27:14.324509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1221 20:27:14.324549       1 metrics.go:72] Registering metrics
	I1221 20:27:14.324640       1 controller.go:711] "Syncing nftables rules"
	I1221 20:27:23.924440       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:23.924492       1 main.go:301] handling current node
	I1221 20:27:33.927346       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:33.927398       1 main.go:301] handling current node
	I1221 20:27:43.924501       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:43.924532       1 main.go:301] handling current node
	I1221 20:27:53.926355       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1221 20:27:53.926393       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95eb61e08ac540d6ae7ad5633b067f39afa90c52f744f0c278ca8314fca227b7] <==
	I1221 20:27:13.029513       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 20:27:13.029519       1 cache.go:39] Caches are synced for autoregister controller
	I1221 20:27:13.029723       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1221 20:27:13.029945       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 20:27:13.030043       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1221 20:27:13.030082       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1221 20:27:13.030443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1221 20:27:13.030543       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1221 20:27:13.030610       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1221 20:27:13.036049       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1221 20:27:13.037354       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1221 20:27:13.046637       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 20:27:13.066325       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1221 20:27:13.249593       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1221 20:27:13.323721       1 controller.go:667] quota admission added evaluator for: namespaces
	I1221 20:27:13.355376       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1221 20:27:13.378896       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 20:27:13.386913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 20:27:13.442408       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.10.0"}
	I1221 20:27:13.461478       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.60.23"}
	I1221 20:27:13.940189       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 20:27:16.831275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1221 20:27:16.881189       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 20:27:16.933298       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7c08998468c34527ba728a9c36db81bc36b48cb65a5de4ad43a6c30cb725137f] <==
	I1221 20:27:16.378520       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1221 20:27:16.378521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1221 20:27:16.378551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1221 20:27:16.378561       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1221 20:27:16.378680       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1221 20:27:16.378700       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1221 20:27:16.378980       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1221 20:27:16.378992       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1221 20:27:16.379048       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1221 20:27:16.379200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1221 20:27:16.379311       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1221 20:27:16.379388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1221 20:27:16.379413       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-766361"
	I1221 20:27:16.380008       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1221 20:27:16.380677       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1221 20:27:16.384178       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1221 20:27:16.384352       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1221 20:27:16.395344       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1221 20:27:16.395390       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1221 20:27:16.395418       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1221 20:27:16.395427       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1221 20:27:16.395433       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1221 20:27:16.396679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1221 20:27:16.398994       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1221 20:27:16.408346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [12105efc4f2b781f722122e1b964d9ab68c8321dae8011e99c3d709752394fcb] <==
	I1221 20:27:13.548662       1 server_linux.go:53] "Using iptables proxy"
	I1221 20:27:13.611894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1221 20:27:13.712109       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1221 20:27:13.712162       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1221 20:27:13.712314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1221 20:27:13.735687       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 20:27:13.735752       1 server_linux.go:132] "Using iptables Proxier"
	I1221 20:27:13.740828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1221 20:27:13.741267       1 server.go:527] "Version info" version="v1.34.3"
	I1221 20:27:13.741308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:13.742645       1 config.go:309] "Starting node config controller"
	I1221 20:27:13.742668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1221 20:27:13.742678       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1221 20:27:13.742714       1 config.go:403] "Starting serviceCIDR config controller"
	I1221 20:27:13.742719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1221 20:27:13.742743       1 config.go:200] "Starting service config controller"
	I1221 20:27:13.742748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1221 20:27:13.742767       1 config.go:106] "Starting endpoint slice config controller"
	I1221 20:27:13.742776       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1221 20:27:13.842850       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1221 20:27:13.842867       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1221 20:27:13.842867       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf48b58ae55f3b51f0d2af85c0df86114d64b6024941d8054a0cca8fbb7e30b0] <==
	I1221 20:27:11.311623       1 serving.go:386] Generated self-signed cert in-memory
	W1221 20:27:12.962600       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1221 20:27:12.962639       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 20:27:12.962667       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1221 20:27:12.962678       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1221 20:27:12.992187       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1221 20:27:12.992292       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 20:27:12.995392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:12.995436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1221 20:27:12.995855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1221 20:27:12.995925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1221 20:27:13.096069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 21 20:27:13 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:13.246508     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678-xtables-lock\") pod \"kindnet-td7vw\" (UID: \"75b37ef9-1b3a-4fb8-b85b-d0a15d6c4678\") " pod="kube-system/kindnet-td7vw"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170198     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/19812d2c-6bef-4834-9d61-fa7abe6c3083-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-65f6x\" (UID: \"19812d2c-6bef-4834-9d61-fa7abe6c3083\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170280     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a18d7a00-dbc1-44ab-936f-eb9fda84c23b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n9w5g\" (UID: \"a18d7a00-dbc1-44ab-936f-eb9fda84c23b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170308     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7sr\" (UniqueName: \"kubernetes.io/projected/a18d7a00-dbc1-44ab-936f-eb9fda84c23b-kube-api-access-zx7sr\") pod \"kubernetes-dashboard-855c9754f9-n9w5g\" (UID: \"a18d7a00-dbc1-44ab-936f-eb9fda84c23b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g"
	Dec 21 20:27:17 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:17.170334     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs92q\" (UniqueName: \"kubernetes.io/projected/19812d2c-6bef-4834-9d61-fa7abe6c3083-kube-api-access-fs92q\") pod \"dashboard-metrics-scraper-6ffb444bf9-65f6x\" (UID: \"19812d2c-6bef-4834-9d61-fa7abe6c3083\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x"
	Dec 21 20:27:20 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:20.143660     732 scope.go:117] "RemoveContainer" containerID="64050056b60fb7c8b94970591bb2207a20e0740bb12c876f067094dcf21e00f0"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:21.154638     732 scope.go:117] "RemoveContainer" containerID="64050056b60fb7c8b94970591bb2207a20e0740bb12c876f067094dcf21e00f0"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:21.156042     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:21 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:21.156279     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:22 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:22.159599     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:22 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:22.160304     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:24 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:24.176976     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n9w5g" podStartSLOduration=1.732714769 podStartE2EDuration="8.176951691s" podCreationTimestamp="2025-12-21 20:27:16 +0000 UTC" firstStartedPulling="2025-12-21 20:27:17.331209534 +0000 UTC m=+7.326332476" lastFinishedPulling="2025-12-21 20:27:23.775446456 +0000 UTC m=+13.770569398" observedRunningTime="2025-12-21 20:27:24.176584114 +0000 UTC m=+14.171707073" watchObservedRunningTime="2025-12-21 20:27:24.176951691 +0000 UTC m=+14.172074637"
	Dec 21 20:27:30 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:30.012655     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:30 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:30.012883     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.092081     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.211572     732 scope.go:117] "RemoveContainer" containerID="394bf6d6221c9be572201ac192c4fdd221240d7a33cdca135be8355e5202abfb"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:41.211831     732 scope.go:117] "RemoveContainer" containerID="57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	Dec 21 20:27:41 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:41.212053     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:44 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:44.223213     732 scope.go:117] "RemoveContainer" containerID="0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84"
	Dec 21 20:27:50 default-k8s-diff-port-766361 kubelet[732]: I1221 20:27:50.011963     732 scope.go:117] "RemoveContainer" containerID="57a39e576411a9140ae52375790f197f403659e01ab391108f2a64114dd53f80"
	Dec 21 20:27:50 default-k8s-diff-port-766361 kubelet[732]: E1221 20:27:50.012163     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-65f6x_kubernetes-dashboard(19812d2c-6bef-4834-9d61-fa7abe6c3083)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-65f6x" podUID="19812d2c-6bef-4834-9d61-fa7abe6c3083"
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 21 20:27:59 default-k8s-diff-port-766361 systemd[1]: kubelet.service: Consumed 1.588s CPU time.
	
	
	==> kubernetes-dashboard [ed1a2848594e0790b69aa5bd98a39232a7761c6729fca3b526d211ed609091f6] <==
	2025/12/21 20:27:23 Using namespace: kubernetes-dashboard
	2025/12/21 20:27:23 Using in-cluster config to connect to apiserver
	2025/12/21 20:27:23 Using secret token for csrf signing
	2025/12/21 20:27:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/21 20:27:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/21 20:27:23 Successful initial request to the apiserver, version: v1.34.3
	2025/12/21 20:27:23 Generating JWE encryption key
	2025/12/21 20:27:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/21 20:27:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/21 20:27:23 Initializing JWE encryption key from synchronized object
	2025/12/21 20:27:23 Creating in-cluster Sidecar client
	2025/12/21 20:27:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:27:24 Serving insecurely on HTTP port: 9090
	2025/12/21 20:27:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/21 20:27:23 Starting overwatch
	
	
	==> storage-provisioner [0c541ab1c15fd8214ad40db5481d004462ddeed2aeddecaf01bc82624ff4cf84] <==
	I1221 20:27:13.500834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1221 20:27:43.505660       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [91d99edda896720cb56583086770434c04c65f4d80dee22293023cc35d4568b0] <==
	I1221 20:27:44.274188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 20:27:44.281798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 20:27:44.281833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1221 20:27:44.283908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:47.739036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:51.999184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:55.598403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:27:58.651368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.673538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.678087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:28:01.678216       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 20:28:01.678401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766361_b10bcb71-4483-4f94-9f25-5c591e44dec1!
	I1221 20:28:01.678340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"577e970a-eb7c-428e-948b-c188b50d25b7", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-766361_b10bcb71-4483-4f94-9f25-5c591e44dec1 became leader
	W1221 20:28:01.680219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1221 20:28:01.683674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1221 20:28:01.778696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766361_b10bcb71-4483-4f94-9f25-5c591e44dec1!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361: exit status 2 (314.438426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.19s)

                                                
                                    

Test pass (359/419)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.17
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 2.86
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.22
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.26
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.38
30 TestBinaryMirror 0.8
31 TestOffline 61.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 95.49
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.41
57 TestAddons/StoppedEnableDisable 16.61
58 TestCertOptions 23.2
59 TestCertExpiration 208.06
61 TestForceSystemdFlag 29.04
62 TestForceSystemdEnv 25.22
67 TestErrorSpam/setup 19.54
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.91
70 TestErrorSpam/pause 6.53
71 TestErrorSpam/unpause 4.97
72 TestErrorSpam/stop 8.07
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 39.18
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.97
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.58
84 TestFunctional/serial/CacheCmd/cache/add_local 1.23
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 47.96
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.14
95 TestFunctional/serial/LogsFileCmd 1.15
96 TestFunctional/serial/InvalidService 3.75
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 6.94
100 TestFunctional/parallel/DryRun 0.41
101 TestFunctional/parallel/InternationalLanguage 0.15
102 TestFunctional/parallel/StatusCmd 0.94
106 TestFunctional/parallel/ServiceCmdConnect 7.65
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 19.17
110 TestFunctional/parallel/SSHCmd 0.62
111 TestFunctional/parallel/CpCmd 1.97
112 TestFunctional/parallel/MySQL 21.34
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.83
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
122 TestFunctional/parallel/License 0.37
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.48
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.56
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.43
130 TestFunctional/parallel/ImageCommands/Setup 0.99
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.06
139 TestFunctional/parallel/ProfileCmd/profile_list 0.63
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.32
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.85
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.15
156 TestFunctional/parallel/MountCmd/any-port 5.83
157 TestFunctional/parallel/ServiceCmd/List 0.95
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.75
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
160 TestFunctional/parallel/ServiceCmd/Format 0.62
161 TestFunctional/parallel/MountCmd/specific-port 1.86
162 TestFunctional/parallel/ServiceCmd/URL 0.6
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 36.41
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 21.96
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.56
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.55
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 56.11
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.19
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 5.6
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.44
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 7.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.37
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.21
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.04
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 9.66
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.15
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 25.73
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.58
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.79
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 47.71
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.88
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.62
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.4
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.08
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.55
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.53
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.25
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.1
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.41
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.14
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.14
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.33
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.91
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 15.2
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.71
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 2.44
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 3.8
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 7.12
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.4
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.39
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.38
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 5.57
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.89
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.89
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.54
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.56
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.88
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.58
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.87
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 155.12
266 TestMultiControlPlane/serial/DeployApp 3.54
267 TestMultiControlPlane/serial/PingHostFromPods 1.01
268 TestMultiControlPlane/serial/AddWorkerNode 27.82
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
271 TestMultiControlPlane/serial/CopyFile 16.73
272 TestMultiControlPlane/serial/StopSecondaryNode 14.63
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.54
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.16
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
279 TestMultiControlPlane/serial/StopCluster 48.49
280 TestMultiControlPlane/serial/RestartCluster 54.48
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
282 TestMultiControlPlane/serial/AddSecondaryNode 39.15
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
288 TestJSONOutput/start/Command 37.59
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.96
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 25.78
314 TestKicCustomNetwork/use_default_bridge_network 23.14
315 TestKicExistingNetwork 23.19
316 TestKicCustomSubnet 25.24
317 TestKicStaticIP 26.05
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 47.32
322 TestMountStart/serial/StartWithMountFirst 4.69
323 TestMountStart/serial/VerifyMountFirst 0.26
324 TestMountStart/serial/StartWithMountSecond 4.69
325 TestMountStart/serial/VerifyMountSecond 0.26
326 TestMountStart/serial/DeleteFirst 1.66
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.24
329 TestMountStart/serial/RestartStopped 7.1
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 66.3
334 TestMultiNode/serial/DeployApp2Nodes 3.12
335 TestMultiNode/serial/PingHostFrom2Pods 0.69
336 TestMultiNode/serial/AddNode 27.23
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.64
339 TestMultiNode/serial/CopyFile 9.57
340 TestMultiNode/serial/StopNode 2.22
341 TestMultiNode/serial/StartAfterStop 7.08
342 TestMultiNode/serial/RestartKeepsNodes 80.49
343 TestMultiNode/serial/DeleteNode 5.19
344 TestMultiNode/serial/StopMultiNode 30.75
345 TestMultiNode/serial/RestartMultiNode 44.43
346 TestMultiNode/serial/ValidateNameConflict 24.87
353 TestScheduledStopUnix 97.43
356 TestInsufficientStorage 8.7
357 TestRunningBinaryUpgrade 322.59
359 TestKubernetesUpgrade 135.12
360 TestMissingContainerUpgrade 89.88
361 TestStoppedBinaryUpgrade/Setup 0.55
363 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
364 TestNoKubernetes/serial/StartWithK8s 42.99
365 TestStoppedBinaryUpgrade/Upgrade 304.28
366 TestNoKubernetes/serial/StartWithStopK8s 25.62
367 TestNoKubernetes/serial/Start 8.16
368 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
369 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
370 TestNoKubernetes/serial/ProfileList 19.37
371 TestNoKubernetes/serial/Stop 1.35
372 TestNoKubernetes/serial/StartNoArgs 6.59
373 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
381 TestNetworkPlugins/group/false 3.71
392 TestPreload/Start-NoPreload-PullImage 60.13
393 TestPreload/Restart-With-Preload-Check-User-Image 49.9
396 TestPause/serial/Start 41.89
397 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
398 TestNetworkPlugins/group/auto/Start 43.73
399 TestPause/serial/SecondStartNoReconfiguration 5.72
401 TestNetworkPlugins/group/kindnet/Start 41.87
402 TestNetworkPlugins/group/auto/KubeletFlags 0.29
403 TestNetworkPlugins/group/auto/NetCatPod 8.19
404 TestNetworkPlugins/group/auto/DNS 0.13
405 TestNetworkPlugins/group/auto/Localhost 0.11
406 TestNetworkPlugins/group/auto/HairPin 0.11
407 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
408 TestNetworkPlugins/group/calico/Start 46.08
409 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
410 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
411 TestNetworkPlugins/group/kindnet/DNS 0.1
412 TestNetworkPlugins/group/kindnet/Localhost 0.09
413 TestNetworkPlugins/group/kindnet/HairPin 0.09
414 TestNetworkPlugins/group/custom-flannel/Start 53.64
415 TestNetworkPlugins/group/enable-default-cni/Start 63.62
416 TestNetworkPlugins/group/calico/ControllerPod 6.01
417 TestNetworkPlugins/group/calico/KubeletFlags 0.32
418 TestNetworkPlugins/group/calico/NetCatPod 10.18
419 TestNetworkPlugins/group/flannel/Start 43.1
420 TestNetworkPlugins/group/calico/DNS 0.12
421 TestNetworkPlugins/group/calico/Localhost 0.11
422 TestNetworkPlugins/group/calico/HairPin 0.1
423 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
424 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
425 TestNetworkPlugins/group/bridge/Start 67.39
426 TestNetworkPlugins/group/custom-flannel/DNS 0.11
427 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
428 TestNetworkPlugins/group/custom-flannel/HairPin 0.08
429 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
430 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
431 TestNetworkPlugins/group/flannel/ControllerPod 6.01
432 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
433 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
434 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
436 TestStartStop/group/old-k8s-version/serial/FirstStart 52.52
437 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
438 TestNetworkPlugins/group/flannel/NetCatPod 11.22
439 TestNetworkPlugins/group/flannel/DNS 0.12
440 TestNetworkPlugins/group/flannel/Localhost 0.1
441 TestNetworkPlugins/group/flannel/HairPin 0.1
443 TestStartStop/group/no-preload/serial/FirstStart 46.12
445 TestStartStop/group/embed-certs/serial/FirstStart 43.98
446 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
447 TestNetworkPlugins/group/bridge/NetCatPod 8.23
448 TestNetworkPlugins/group/bridge/DNS 0.14
449 TestNetworkPlugins/group/bridge/Localhost 0.1
450 TestNetworkPlugins/group/bridge/HairPin 0.09
451 TestStartStop/group/old-k8s-version/serial/DeployApp 7.26
453 TestStartStop/group/old-k8s-version/serial/Stop 15.98
454 TestStartStop/group/no-preload/serial/DeployApp 8.24
456 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.34
458 TestStartStop/group/no-preload/serial/Stop 18.2
459 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
460 TestStartStop/group/old-k8s-version/serial/SecondStart 44.42
461 TestStartStop/group/embed-certs/serial/DeployApp 8.27
463 TestStartStop/group/embed-certs/serial/Stop 16.69
464 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
465 TestStartStop/group/no-preload/serial/SecondStart 51.34
466 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
467 TestStartStop/group/embed-certs/serial/SecondStart 47.18
468 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
470 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.87
472 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
473 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
475 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
476 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.72
478 TestStartStop/group/newest-cni/serial/FirstStart 24.24
479 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
480 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
481 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
482 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
484 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
485 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
487 TestStartStop/group/newest-cni/serial/DeployApp 0
489 TestPreload/PreloadSrc/gcs 4.4
490 TestStartStop/group/newest-cni/serial/Stop 8.64
491 TestPreload/PreloadSrc/github 5.66
492 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
493 TestStartStop/group/newest-cni/serial/SecondStart 10.54
494 TestPreload/PreloadSrc/gcs-cached 0.47
495 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
496 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
497 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
498 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
500 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
501 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
x
+
TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.16799696s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1221 19:46:15.651416   12711 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1221 19:46:15.651506   12711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-940314
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-940314: exit status 85 (72.585321ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-940314 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:11.534989   12723 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:11.535190   12723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:11.535199   12723 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:11.535202   12723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:11.535385   12723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	W1221 19:46:11.535500   12723 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22179-9159/.minikube/config/config.json: open /home/jenkins/minikube-integration/22179-9159/.minikube/config/config.json: no such file or directory
	I1221 19:46:11.535959   12723 out.go:368] Setting JSON to true
	I1221 19:46:11.536790   12723 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1720,"bootTime":1766344651,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:11.536836   12723 start.go:143] virtualization: kvm guest
	I1221 19:46:11.541602   12723 out.go:99] [download-only-940314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1221 19:46:11.541719   12723 preload.go:369] Failed to list preload files: open /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball: no such file or directory
	I1221 19:46:11.541771   12723 notify.go:221] Checking for updates...
	I1221 19:46:11.542791   12723 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:11.544155   12723 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:11.545235   12723 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:46:11.546336   12723 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:46:11.550762   12723 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 19:46:11.552824   12723 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 19:46:11.553044   12723 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:11.575825   12723 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:46:11.575875   12723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:11.794333   12723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-21 19:46:11.78490816 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:11.794446   12723 docker.go:319] overlay module found
	I1221 19:46:11.795939   12723 out.go:99] Using the docker driver based on user configuration
	I1221 19:46:11.795972   12723 start.go:309] selected driver: docker
	I1221 19:46:11.795980   12723 start.go:928] validating driver "docker" against <nil>
	I1221 19:46:11.796068   12723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:11.849633   12723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-21 19:46:11.840247528 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:11.849774   12723 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:11.850271   12723 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 19:46:11.850437   12723 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 19:46:11.852011   12723 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-940314 host does not exist
	  To start a cluster, run: "minikube start -p download-only-940314"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-940314
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (2.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-650604 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-650604 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.864045148s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (2.86s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1221 19:46:18.936050   12711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1221 19:46:18.936078   12711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-650604
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-650604: exit status 85 (69.224968ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-940314 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-940314                                                                                                                                                   │ download-only-940314 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-650604 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-650604 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:16.121755   13083 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:16.121856   13083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:16.121867   13083 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:16.121873   13083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:16.122070   13083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:46:16.122563   13083 out.go:368] Setting JSON to true
	I1221 19:46:16.123332   13083 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1725,"bootTime":1766344651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:16.123378   13083 start.go:143] virtualization: kvm guest
	I1221 19:46:16.125275   13083 out.go:99] [download-only-650604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:16.125404   13083 notify.go:221] Checking for updates...
	I1221 19:46:16.126708   13083 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:16.127888   13083 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:16.129075   13083 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:46:16.132794   13083 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:46:16.133920   13083 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 19:46:16.136071   13083 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 19:46:16.136312   13083 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:16.158837   13083 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:46:16.158945   13083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:16.212595   13083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-21 19:46:16.201663841 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:16.212686   13083 docker.go:319] overlay module found
	I1221 19:46:16.214173   13083 out.go:99] Using the docker driver based on user configuration
	I1221 19:46:16.214206   13083 start.go:309] selected driver: docker
	I1221 19:46:16.214218   13083 start.go:928] validating driver "docker" against <nil>
	I1221 19:46:16.214318   13083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:16.263398   13083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-21 19:46:16.254387615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:16.263543   13083 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:16.264139   13083 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 19:46:16.264342   13083 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 19:46:16.266025   13083 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-650604 host does not exist
	  To start a cluster, run: "minikube start -p download-only-650604"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-650604
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (3.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551976 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551976 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.256170462s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.26s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1221 19:46:22.618089   12711 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1221 19:46:22.618120   12711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551976
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551976: exit status 85 (71.369977ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-940314 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-940314                                                                                                                                                        │ download-only-940314 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-650604 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-650604 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ delete  │ -p download-only-650604                                                                                                                                                        │ download-only-650604 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │ 21 Dec 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-551976 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-551976 │ jenkins │ v1.37.0 │ 21 Dec 25 19:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/21 19:46:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 19:46:19.412043   13443 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:46:19.412155   13443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:19.412164   13443 out.go:374] Setting ErrFile to fd 2...
	I1221 19:46:19.412169   13443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:46:19.412367   13443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:46:19.412839   13443 out.go:368] Setting JSON to true
	I1221 19:46:19.413693   13443 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1728,"bootTime":1766344651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:46:19.413742   13443 start.go:143] virtualization: kvm guest
	I1221 19:46:19.415648   13443 out.go:99] [download-only-551976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:46:19.415775   13443 notify.go:221] Checking for updates...
	I1221 19:46:19.416976   13443 out.go:171] MINIKUBE_LOCATION=22179
	I1221 19:46:19.418057   13443 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:46:19.419171   13443 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:46:19.420372   13443 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:46:19.424750   13443 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 19:46:19.426836   13443 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 19:46:19.427042   13443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:46:19.450247   13443 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:46:19.450363   13443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:19.500854   13443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-21 19:46:19.492355587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:19.500956   13443 docker.go:319] overlay module found
	I1221 19:46:19.502397   13443 out.go:99] Using the docker driver based on user configuration
	I1221 19:46:19.502417   13443 start.go:309] selected driver: docker
	I1221 19:46:19.502422   13443 start.go:928] validating driver "docker" against <nil>
	I1221 19:46:19.502504   13443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:46:19.557613   13443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-21 19:46:19.548184928 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:46:19.557810   13443 start_flags.go:329] no existing cluster config was found, will generate one from the flags 
	I1221 19:46:19.558449   13443 start_flags.go:413] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1221 19:46:19.558644   13443 start_flags.go:977] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 19:46:19.560320   13443 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-551976 host does not exist
	  To start a cluster, run: "minikube start -p download-only-551976"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-551976
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-556619 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-556619" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-556619
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1221 19:46:23.826770   12711 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-301733 --alsologtostderr --binary-mirror http://127.0.0.1:43353 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-301733" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-301733
--- PASS: TestBinaryMirror (0.80s)

                                                
                                    
TestOffline (61.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-594930 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-594930 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (59.507813918s)
helpers_test.go:176: Cleaning up "offline-crio-594930" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-594930
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-594930: (2.358618535s)
--- PASS: TestOffline (61.87s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-734405
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-734405: exit status 85 (63.908634ms)

                                                
                                                
-- stdout --
	* Profile "addons-734405" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-734405"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-734405
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-734405: exit status 85 (63.154439ms)

                                                
                                                
-- stdout --
	* Profile "addons-734405" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-734405"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (95.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-734405 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-734405 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m35.494200329s)
--- PASS: TestAddons/Setup (95.49s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-734405 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-734405 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-734405 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-734405 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a5a9677e-ccdd-4fb3-ad46-086786f62164] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a5a9677e-ccdd-4fb3-ad46-086786f62164] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003275843s
addons_test.go:696: (dbg) Run:  kubectl --context addons-734405 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-734405 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-734405 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.61s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-734405
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-734405: (16.336030737s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-734405
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-734405
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-734405
--- PASS: TestAddons/StoppedEnableDisable (16.61s)

                                                
                                    
TestCertOptions (23.2s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-746684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-746684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.130954539s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-746684 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-746684 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-746684 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-746684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-746684
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-746684: (2.419560683s)
--- PASS: TestCertOptions (23.20s)

                                                
                                    
TestCertExpiration (208.06s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-026803 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-026803 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (19.740240657s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-026803 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-026803 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.71844237s)
helpers_test.go:176: Cleaning up "cert-expiration-026803" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-026803
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-026803: (2.604791584s)
--- PASS: TestCertExpiration (208.06s)

                                                
                                    
TestForceSystemdFlag (29.04s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-301440 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-301440 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.269516814s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-301440 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-301440" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-301440
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-301440: (2.470953621s)
--- PASS: TestForceSystemdFlag (29.04s)

                                                
                                    
TestForceSystemdEnv (25.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-558127 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-558127 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.760412073s)
helpers_test.go:176: Cleaning up "force-systemd-env-558127" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-558127
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-558127: (2.46086306s)
--- PASS: TestForceSystemdEnv (25.22s)

                                                
                                    
TestErrorSpam/setup (19.54s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-774887 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-774887 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-774887 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-774887 --driver=docker  --container-runtime=crio: (19.544592444s)
--- PASS: TestErrorSpam/setup (19.54s)

                                                
                                    
TestErrorSpam/start (0.63s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.91s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 status
--- PASS: TestErrorSpam/status (0.91s)

                                                
                                    
TestErrorSpam/pause (6.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause: exit status 80 (2.072714221s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause: exit status 80 (2.308564691s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause: exit status 80 (2.147655905s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.53s)

                                                
                                    
TestErrorSpam/unpause (4.97s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause: exit status 80 (1.343799695s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause: exit status 80 (1.853817014s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause: exit status 80 (1.776256631s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774887 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-21T19:51:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.97s)
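All three unpause attempts above fail the same way: the GUEST_UNPAUSE step wraps `sudo runc list -f json`, which exits 1 because /run/runc is missing on the node. A minimal sketch for reproducing that check by hand with the profile and log dir from this test; treating /run/runc as runc's default state directory is an assumption here, not something the test asserts.

	# Check runc's state directory inside the node; the failure above reports it as absent.
	out/minikube-linux-amd64 -p nospam-774887 ssh "sudo ls /run/runc"
	# Re-run the exact listing the pause helper wraps.
	out/minikube-linux-amd64 -p nospam-774887 ssh "sudo runc list -f json"
	# Then retry the failing command itself.
	out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 unpause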

                                                
                                    
x
+
TestErrorSpam/stop (8.07s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 stop: (7.86939737s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-774887 --log_dir /tmp/nospam-774887 stop
--- PASS: TestErrorSpam/stop (8.07s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/test/nested/copy/12711/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (39.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-675499 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.174886221s)
--- PASS: TestFunctional/serial/StartWithProxy (39.18s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.97s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1221 19:52:41.232620   12711 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-675499 --alsologtostderr -v=8: (5.96527256s)
functional_test.go:678: soft start took 5.966051625s for "functional-675499" cluster.
I1221 19:52:47.198339   12711 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (5.97s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-675499 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-675499 /tmp/TestFunctionalserialCacheCmdcacheadd_local3983391077/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache add minikube-local-cache-test:functional-675499
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache delete minikube-local-cache-test:functional-675499
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-675499
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (270.539907ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
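The reload sequence above can be replayed by hand: remove a cached image from the node, confirm crictl no longer sees it, then let `cache reload` push it back. A short sketch using the same commands the test runs:

	# Drop the image inside the node, then verify it is gone (inspecti exits 1).
	out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# Re-push everything in minikube's local cache and confirm the image is back.
	out/minikube-linux-amd64 -p functional-675499 cache reload
	out/minikube-linux-amd64 -p functional-675499 ssh sudo crictl inspecti registry.k8s.io/pause:latest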

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 kubectl -- --context functional-675499 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-675499 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (47.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 19:53:00.988554   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:00.993823   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.004056   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.024299   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.064540   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.144853   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.305260   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:01.625810   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:02.266081   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:03.546541   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:06.108333   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:11.228585   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:53:21.468852   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-675499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.958466559s)
functional_test.go:776: restart took 47.958587509s for "functional-675499" cluster.
I1221 19:53:41.339196   12711 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (47.96s)
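The restart above applies an apiserver flag via --extra-config. A small sketch, under the assumption of the standard kubeadm static-pod layout, for confirming the flag landed; the label selector and grep target are illustrative and not part of the test:

	out/minikube-linux-amd64 start -p functional-675499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# Inspect the control-plane pod specs for the admission-plugins flag (layout assumed).
	kubectl --context functional-675499 get pods -n kube-system -l tier=control-plane -o yaml | grep enable-admission-plugins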

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-675499 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 logs
E1221 19:53:41.949446   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 logs: (1.142735857s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 logs --file /tmp/TestFunctionalserialLogsFileCmd3882234156/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 logs --file /tmp/TestFunctionalserialLogsFileCmd3882234156/001/logs.txt: (1.153682182s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.75s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-675499 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-675499
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-675499: exit status 115 (326.701081ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31359 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-675499 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.75s)
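Exit status 115 here is the expected outcome: the service object exists but no running pod backs it, so `minikube service` refuses to hand out the URL with SVC_UNREACHABLE. A sketch of the same round trip with the test's manifest:

	kubectl --context functional-675499 apply -f testdata/invalidsvc.yaml
	# Expected to fail with exit status 115 while no pod backs the service.
	out/minikube-linux-amd64 service invalid-svc -p functional-675499
	kubectl --context functional-675499 delete -f testdata/invalidsvc.yaml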

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 config get cpus: exit status 14 (81.719874ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 config get cpus: exit status 14 (82.777078ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
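`config get` exits 14 whenever the key is unset, which is what the two non-zero exits above verify. The full set/get/unset cycle from the log, runnable as-is:

	out/minikube-linux-amd64 -p functional-675499 config unset cpus
	out/minikube-linux-amd64 -p functional-675499 config get cpus    # exit 14: key not in config
	out/minikube-linux-amd64 -p functional-675499 config set cpus 2
	out/minikube-linux-amd64 -p functional-675499 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-675499 config unset cpus
	out/minikube-linux-amd64 -p functional-675499 config get cpus    # exit 14 again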

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (6.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-675499 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-675499 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 51559: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.94s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (158.956914ms)

                                                
                                                
-- stdout --
	* [functional-675499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:54:10.409330   50516 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:54:10.409580   50516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:54:10.409590   50516 out.go:374] Setting ErrFile to fd 2...
	I1221 19:54:10.409594   50516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:54:10.409782   50516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:54:10.410189   50516 out.go:368] Setting JSON to false
	I1221 19:54:10.411120   50516 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2199,"bootTime":1766344651,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:54:10.411176   50516 start.go:143] virtualization: kvm guest
	I1221 19:54:10.414354   50516 out.go:179] * [functional-675499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:54:10.415756   50516 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:54:10.415788   50516 notify.go:221] Checking for updates...
	I1221 19:54:10.417995   50516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:54:10.419128   50516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:54:10.423425   50516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:54:10.424767   50516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:54:10.425983   50516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:54:10.427596   50516 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:54:10.428102   50516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:54:10.451157   50516 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:54:10.451242   50516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:54:10.501673   50516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-21 19:54:10.492246287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:54:10.501827   50516 docker.go:319] overlay module found
	I1221 19:54:10.503289   50516 out.go:179] * Using the docker driver based on existing profile
	I1221 19:54:10.504427   50516 start.go:309] selected driver: docker
	I1221 19:54:10.504441   50516 start.go:928] validating driver "docker" against &{Name:functional-675499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-675499 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:54:10.504515   50516 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:54:10.506194   50516 out.go:203] 
	W1221 19:54:10.507301   50516 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 19:54:10.508333   50516 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
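Both dry-run invocations reuse the existing profile; only the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY validation (usable minimum 1800MB per the stderr above). A sketch of the two calls, the second without the memory override:

	# Fails validation with exit status 23: 250MB is below the 1800MB usable minimum.
	out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	# The same dry run without the memory override passes.
	out/minikube-linux-amd64 start -p functional-675499 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio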

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (152.613779ms)

                                                
                                                
-- stdout --
	* [functional-675499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:54:08.104942   49483 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:54:08.105025   49483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:54:08.105033   49483 out.go:374] Setting ErrFile to fd 2...
	I1221 19:54:08.105037   49483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:54:08.105329   49483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:54:08.105843   49483 out.go:368] Setting JSON to false
	I1221 19:54:08.107069   49483 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2197,"bootTime":1766344651,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:54:08.107139   49483 start.go:143] virtualization: kvm guest
	I1221 19:54:08.109996   49483 out.go:179] * [functional-675499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1221 19:54:08.111322   49483 notify.go:221] Checking for updates...
	I1221 19:54:08.111332   49483 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:54:08.112608   49483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:54:08.114028   49483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:54:08.115270   49483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:54:08.116519   49483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:54:08.117875   49483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:54:08.119406   49483 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 19:54:08.119954   49483 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:54:08.142200   49483 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:54:08.142353   49483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:54:08.193152   49483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-21 19:54:08.184030274 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:54:08.193272   49483 docker.go:319] overlay module found
	I1221 19:54:08.194818   49483 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1221 19:54:08.195997   49483 start.go:309] selected driver: docker
	I1221 19:54:08.196011   49483 start.go:928] validating driver "docker" against &{Name:functional-675499 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-675499 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:54:08.196104   49483 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:54:08.197827   49483 out.go:203] 
	W1221 19:54:08.198866   49483 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 19:54:08.199949   49483 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
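The French output above comes from the same insufficient-memory dry run as before, rendered in a non-English locale. A sketch of one way to get that output; selecting the locale through LC_ALL is an assumption here, the test log does not show how the locale was set:

	# Locale selection via LC_ALL is assumed, not confirmed by the log.
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-675499 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio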

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-675499 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-675499 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-rvw44" [40e03f12-007c-46ff-86a7-bad79ca4c745] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-rvw44" [40e03f12-007c-46ff-86a7-bad79ca4c745] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003783461s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32609
functional_test.go:1680: http://192.168.49.2:32609: success! body:
Request served by hello-node-connect-7d85dfc575-rvw44

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32609
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.65s)
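The connect test is a plain deployment, NodePort service, URL lookup, HTTP GET round trip. A sketch built from the commands in the log; the final curl stands in for the Go HTTP client the test uses and is an assumption:

	kubectl --context functional-675499 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-675499 expose deployment hello-node-connect --type=NodePort --port=8080
	# Once the pod is Running, this prints something like http://192.168.49.2:32609
	URL=$(out/minikube-linux-amd64 -p functional-675499 service hello-node-connect --url)
	curl -s "$URL"    # echo-server replies with the request it received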

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (19.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [457e3b5d-542e-40fe-bfbd-5c6fa8956034] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003767207s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-675499 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-675499 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-675499 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-675499 apply -f testdata/storage-provisioner/pod.yaml
I1221 19:53:58.043905   12711 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [26c01a12-8662-4487-ba3d-aa26242c3cb4] Pending
helpers_test.go:353: "sp-pod" [26c01a12-8662-4487-ba3d-aa26242c3cb4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003784234s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-675499 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-675499 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-675499 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [edb5b74d-1644-4ccb-8c1a-aa3c8571131e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [edb5b74d-1644-4ccb-8c1a-aa3c8571131e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004840929s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-675499 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.17s)
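The PVC check is a persistence round trip: write a file from one pod, delete the pod, recreate it against the same claim, and read the file back. The same steps, taken from the log:

	kubectl --context functional-675499 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-675499 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-675499 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-675499 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-675499 apply -f testdata/storage-provisioner/pod.yaml
	# The new sp-pod mounts the same claim, so the file written above is still there.
	kubectl --context functional-675499 exec sp-pod -- ls /tmp/mount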

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh -n functional-675499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cp functional-675499:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1254091918/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh -n functional-675499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh -n functional-675499 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)
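`minikube cp` is exercised in both directions above, including a copy into a directory that does not yet exist on the node (the follow-up cat succeeds, so the path is created). Replayed directly from the log:

	out/minikube-linux-amd64 -p functional-675499 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-675499 cp functional-675499:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1254091918/001/cp-test.txt
	out/minikube-linux-amd64 -p functional-675499 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	out/minikube-linux-amd64 -p functional-675499 ssh -n functional-675499 "sudo cat /tmp/does/not/exist/cp-test.txt"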

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-675499 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-2xvj6" [0a1c32b7-625d-48cb-9d0d-d4c9b927b2cb] Pending
helpers_test.go:353: "mysql-6bcdcbc558-2xvj6" [0a1c32b7-625d-48cb-9d0d-d4c9b927b2cb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-2xvj6" [0a1c32b7-625d-48cb-9d0d-d4c9b927b2cb] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003432023s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;": exit status 1 (97.14352ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1221 19:54:03.355449   12711 retry.go:84] will retry after 500ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;": exit status 1 (86.838636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;": exit status 1 (82.78245ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-675499 exec mysql-6bcdcbc558-2xvj6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.34s)

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12711/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /etc/test/nested/copy/12711/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /etc/ssl/certs/12711.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /usr/share/ca-certificates/12711.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/127112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /etc/ssl/certs/127112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/127112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /usr/share/ca-certificates/127112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-675499 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "sudo systemctl is-active docker": exit status 1 (311.770567ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "sudo systemctl is-active containerd": exit status 1 (308.553056ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                    
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675499 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-675499
localhost/kicbase/echo-server:functional-675499
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675499 image ls --format short --alsologtostderr:
I1221 19:54:11.871460   51551 out.go:360] Setting OutFile to fd 1 ...
I1221 19:54:11.871593   51551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:11.871605   51551 out.go:374] Setting ErrFile to fd 2...
I1221 19:54:11.871611   51551 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:11.871923   51551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:54:11.872662   51551 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:11.872813   51551 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:11.873414   51551 cli_runner.go:164] Run: docker container inspect functional-675499 --format={{.State.Status}}
I1221 19:54:11.895736   51551 ssh_runner.go:195] Run: systemctl --version
I1221 19:54:11.895823   51551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675499
I1221 19:54:11.913712   51551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-675499/id_rsa Username:docker}
I1221 19:54:12.009206   51551 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675499 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-675499                     │ e7f404aef4fc5 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ gcr.io/k8s-minikube/busybox             │ latest                                │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-675499                     │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-675499                     │ 71c3ff7cc8f57 │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675499 image ls --format table --alsologtostderr:
I1221 19:54:15.746677   53521 out.go:360] Setting OutFile to fd 1 ...
I1221 19:54:15.746774   53521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:15.746785   53521 out.go:374] Setting ErrFile to fd 2...
I1221 19:54:15.746793   53521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:15.747007   53521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:54:15.747568   53521 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:15.747668   53521 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:15.748078   53521 cli_runner.go:164] Run: docker container inspect functional-675499 --format={{.State.Status}}
I1221 19:54:15.767831   53521 ssh_runner.go:195] Run: systemctl --version
I1221 19:54:15.767887   53521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675499
I1221 19:54:15.788972   53521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-675499/id_rsa Username:docker}
I1221 19:54:15.887569   53521 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675499 image ls --format json --alsologtostderr:
[{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256
:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd6682
2659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-675499"],"size":"4943877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindes
t/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"66475f03b373160c0c8ad36b213f00e8524961d61ae0f2a3a69336b96766a0d4","repoDigests":["docker.io/library/8a1877cb9e2dc46cbffcd3519e2440c2b1d82e6ec984ed28c4a48fd350862580-tmp@sha256:5db57b5975660930d42a443ebeedf38fd5dc64e98485d23da2e5e5085ec8f68e"],"repoTags":[],"size":"1466132"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e7f404aef4fc514c094bcf5d98862a277995edd988da4a126030fd3be9d0fc4c","repoDig
ests":["localhost/minikube-local-cache-test@sha256:0d1740e64ab1be8332caebd4abc51dcf5bbdb04c974e90249f6ee709898f4c5f"],"repoTags":["localhost/minikube-local-cache-test:functional-675499"],"size":"3330"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@
sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"71c3ff7cc8f57283570c06cdd945f39d6d8a1cee5bceaca5d6b020b4568a5c04","repoDigests":["localhost/my-image@sha256:d17603b63bf8772637ea34020186d4e42283f58337e9ae918b4d0e9513220d22"],"repoTags":["localhost/my-image:functional-675499"],"size":"1468744"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f15965
2f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.
k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675499 image ls --format json --alsologtostderr:
I1221 19:54:15.208453   53283 out.go:360] Setting OutFile to fd 1 ...
I1221 19:54:15.208570   53283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:15.208582   53283 out.go:374] Setting ErrFile to fd 2...
I1221 19:54:15.208590   53283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:15.208908   53283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:54:15.209706   53283 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:15.209876   53283 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:15.210592   53283 cli_runner.go:164] Run: docker container inspect functional-675499 --format={{.State.Status}}
I1221 19:54:15.232839   53283 ssh_runner.go:195] Run: systemctl --version
I1221 19:54:15.232900   53283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675499
I1221 19:54:15.258045   53283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-675499/id_rsa Username:docker}
I1221 19:54:15.366217   53283 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675499 image ls --format yaml --alsologtostderr:
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-675499
size: "4943877"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e7f404aef4fc514c094bcf5d98862a277995edd988da4a126030fd3be9d0fc4c
repoDigests:
- localhost/minikube-local-cache-test@sha256:0d1740e64ab1be8332caebd4abc51dcf5bbdb04c974e90249f6ee709898f4c5f
repoTags:
- localhost/minikube-local-cache-test:functional-675499
size: "3330"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675499 image ls --format yaml --alsologtostderr:
I1221 19:54:12.097152   51634 out.go:360] Setting OutFile to fd 1 ...
I1221 19:54:12.097457   51634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:12.097470   51634 out.go:374] Setting ErrFile to fd 2...
I1221 19:54:12.097476   51634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:12.097777   51634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:54:12.098589   51634 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:12.098753   51634 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:12.099430   51634 cli_runner.go:164] Run: docker container inspect functional-675499 --format={{.State.Status}}
I1221 19:54:12.117908   51634 ssh_runner.go:195] Run: systemctl --version
I1221 19:54:12.117951   51634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675499
I1221 19:54:12.134334   51634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-675499/id_rsa Username:docker}
I1221 19:54:12.229721   51634 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh pgrep buildkitd: exit status 1 (268.882205ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image build -t localhost/my-image:functional-675499 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 image build -t localhost/my-image:functional-675499 testdata/build --alsologtostderr: (1.850793515s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-675499 image build -t localhost/my-image:functional-675499 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 66475f03b37
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-675499
--> 71c3ff7cc8f
Successfully tagged localhost/my-image:functional-675499
71c3ff7cc8f57283570c06cdd945f39d6d8a1cee5bceaca5d6b020b4568a5c04
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-675499 image build -t localhost/my-image:functional-675499 testdata/build --alsologtostderr:
I1221 19:54:12.587306   51846 out.go:360] Setting OutFile to fd 1 ...
I1221 19:54:12.587577   51846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:12.587587   51846 out.go:374] Setting ErrFile to fd 2...
I1221 19:54:12.587590   51846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:54:12.587787   51846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:54:12.588318   51846 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:12.588962   51846 config.go:182] Loaded profile config "functional-675499": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1221 19:54:12.589399   51846 cli_runner.go:164] Run: docker container inspect functional-675499 --format={{.State.Status}}
I1221 19:54:12.606887   51846 ssh_runner.go:195] Run: systemctl --version
I1221 19:54:12.606959   51846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-675499
I1221 19:54:12.623770   51846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-675499/id_rsa Username:docker}
I1221 19:54:12.718235   51846 build_images.go:162] Building image from path: /tmp/build.1031196195.tar
I1221 19:54:12.718330   51846 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 19:54:12.725969   51846 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1031196195.tar
I1221 19:54:12.729321   51846 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1031196195.tar: stat -c "%s %y" /var/lib/minikube/build/build.1031196195.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1031196195.tar': No such file or directory
I1221 19:54:12.729346   51846 ssh_runner.go:362] scp /tmp/build.1031196195.tar --> /var/lib/minikube/build/build.1031196195.tar (3072 bytes)
I1221 19:54:12.746038   51846 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1031196195
I1221 19:54:12.753454   51846 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1031196195 -xf /var/lib/minikube/build/build.1031196195.tar
I1221 19:54:12.761057   51846 crio.go:315] Building image: /var/lib/minikube/build/build.1031196195
I1221 19:54:12.761102   51846 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-675499 /var/lib/minikube/build/build.1031196195 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1221 19:54:14.354068   51846 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-675499 /var/lib/minikube/build/build.1031196195 --cgroup-manager=cgroupfs: (1.592938522s)
I1221 19:54:14.354138   51846 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1031196195
I1221 19:54:14.364121   51846 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1031196195.tar
I1221 19:54:14.373663   51846 build_images.go:218] Built localhost/my-image:functional-675499 from /tmp/build.1031196195.tar
I1221 19:54:14.373697   51846 build_images.go:134] succeeded building to: functional-675499
I1221 19:54:14.373703   51846 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 image ls: (1.305401904s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-675499
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image load --daemon kicbase/echo-server:functional-675499 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 image load --daemon kicbase/echo-server:functional-675499 --alsologtostderr: (1.212376291s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 46988: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image load --daemon kicbase/echo-server:functional-675499 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 image load --daemon kicbase/echo-server:functional-675499 --alsologtostderr: (2.695947281s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 image ls: (3.365960568s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.06s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "527.262217ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "103.403218ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-675499 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [56ff673f-413c-4a5c-86db-88f756145a17] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [56ff673f-413c-4a5c-86db-88f756145a17] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003111402s
I1221 19:54:04.261171   12711 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "454.073017ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.769025ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-675499
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image load --daemon kicbase/echo-server:functional-675499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image save kicbase/echo-server:functional-675499 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image rm kicbase/echo-server:functional-675499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-675499
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 image save --daemon kicbase/echo-server:functional-675499 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-675499
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-675499 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.202.155 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-675499 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-675499 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-675499 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-pscqf" [ffa170fe-0223-4cc1-a06a-8d058bf32d49] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1221 19:54:04.644874   12711 detect.go:223] nested VM detected
helpers_test.go:353: "hello-node-75c85bcc94-pscqf" [ffa170fe-0223-4cc1-a06a-8d058bf32d49] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003988679s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdany-port2326238457/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766346848205844971" to /tmp/TestFunctionalparallelMountCmdany-port2326238457/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766346848205844971" to /tmp/TestFunctionalparallelMountCmdany-port2326238457/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766346848205844971" to /tmp/TestFunctionalparallelMountCmdany-port2326238457/001/test-1766346848205844971
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.291024ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 19:54 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 19:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 19:54 test-1766346848205844971
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh cat /mount-9p/test-1766346848205844971
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-675499 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ea02a253-187b-45eb-a01a-81c77d64e3f2] Pending
helpers_test.go:353: "busybox-mount" [ea02a253-187b-45eb-a01a-81c77d64e3f2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ea02a253-187b-45eb-a01a-81c77d64e3f2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ea02a253-187b-45eb-a01a-81c77d64e3f2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004002042s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-675499 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdany-port2326238457/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-675499 service list -o json: (1.746314234s)
functional_test.go:1504: Took "1.746423197s" to run "out/minikube-linux-amd64 -p functional-675499 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30092
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdspecific-port1961588711/001:/mount-9p --alsologtostderr -v=1 --port 34281]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.353381ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 19:54:14.401405   12711 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdspecific-port1961588711/001:/mount-9p --alsologtostderr -v=1 --port 34281] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "sudo umount -f /mount-9p": exit status 1 (299.539426ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-675499 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdspecific-port1961588711/001:/mount-9p --alsologtostderr -v=1 --port 34281] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30092
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T" /mount1: exit status 1 (329.766595ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-675499 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-675499 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount1 --alsologtostderr -v=1] ...
2025/12/21 19:54:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-675499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1266717621/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-675499
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-675499
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-675499
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22179-9159/.minikube/files/etc/test/nested/copy/12711/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1221 19:54:22.910661   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-463261 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (36.413903971s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (21.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1221 19:54:58.001305   12711 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-463261 --alsologtostderr -v=8: (21.960284528s)
functional_test.go:678: soft start took 21.960638815s for "functional-463261" cluster.
I1221 19:55:19.961921   12711 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (21.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-463261 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC3589685248/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache add minikube-local-cache-test:functional-463261
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache delete minikube-local-cache-test:functional-463261
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.798987ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 kubectl -- --context functional-463261 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-463261 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (56.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 19:55:44.831216   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-463261 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.104905914s)
functional_test.go:776: restart took 56.105026109s for "functional-463261" cluster.
I1221 19:56:22.220850   12711 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (56.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-463261 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 logs: (1.192939032s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1114279973/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1114279973/001/logs.txt: (1.192058656s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (5.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-463261 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-463261
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-463261: exit status 115 (332.151444ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31402 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-463261 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-463261 delete -f testdata/invalidsvc.yaml: (2.101165837s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (5.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 config get cpus: exit status 14 (71.423778ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 config get cpus: exit status 14 (76.924322ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (7.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463261 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463261 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 71010: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (7.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-463261 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (161.591366ms)

                                                
                                                
-- stdout --
	* [functional-463261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:56:58.808394   69891 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:56:58.808657   69891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:56:58.808666   69891 out.go:374] Setting ErrFile to fd 2...
	I1221 19:56:58.808671   69891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:56:58.808917   69891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:56:58.809406   69891 out.go:368] Setting JSON to false
	I1221 19:56:58.810468   69891 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2368,"bootTime":1766344651,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:56:58.810534   69891 start.go:143] virtualization: kvm guest
	I1221 19:56:58.812008   69891 out.go:179] * [functional-463261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 19:56:58.813161   69891 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:56:58.813163   69891 notify.go:221] Checking for updates...
	I1221 19:56:58.814380   69891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:56:58.815554   69891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:56:58.816734   69891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:56:58.817841   69891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:56:58.818843   69891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:56:58.820337   69891 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 19:56:58.820832   69891 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:56:58.845465   69891 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:56:58.845614   69891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:56:58.899429   69891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-21 19:56:58.889933013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:56:58.899530   69891 docker.go:319] overlay module found
	I1221 19:56:58.901043   69891 out.go:179] * Using the docker driver based on existing profile
	I1221 19:56:58.902367   69891 start.go:309] selected driver: docker
	I1221 19:56:58.902383   69891 start.go:928] validating driver "docker" against &{Name:functional-463261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-463261 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:56:58.902491   69891 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:56:58.904281   69891 out.go:203] 
	W1221 19:56:58.905457   69891 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 19:56:58.907678   69891 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463261 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-463261 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (207.159708ms)

                                                
                                                
-- stdout --
	* [functional-463261] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 19:56:58.625928   69643 out.go:360] Setting OutFile to fd 1 ...
	I1221 19:56:58.626037   69643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:56:58.626050   69643 out.go:374] Setting ErrFile to fd 2...
	I1221 19:56:58.626056   69643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 19:56:58.626503   69643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 19:56:58.627476   69643 out.go:368] Setting JSON to false
	I1221 19:56:58.628451   69643 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2368,"bootTime":1766344651,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 19:56:58.628510   69643 start.go:143] virtualization: kvm guest
	I1221 19:56:58.629970   69643 out.go:179] * [functional-463261] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1221 19:56:58.632423   69643 notify.go:221] Checking for updates...
	I1221 19:56:58.632460   69643 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 19:56:58.634421   69643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 19:56:58.636698   69643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 19:56:58.637969   69643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 19:56:58.639168   69643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 19:56:58.640325   69643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 19:56:58.642074   69643 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 19:56:58.642988   69643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 19:56:58.672687   69643 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 19:56:58.672793   69643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 19:56:58.736020   69643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-21 19:56:58.726774011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 19:56:58.736178   69643 docker.go:319] overlay module found
	I1221 19:56:58.737721   69643 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1221 19:56:58.738865   69643 start.go:309] selected driver: docker
	I1221 19:56:58.738876   69643 start.go:928] validating driver "docker" against &{Name:functional-463261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-463261 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1221 19:56:58.738961   69643 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 19:56:58.741798   69643 out.go:203] 
	W1221 19:56:58.743273   69643 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 19:56:58.744289   69643 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.04s)
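The three invocations above cover the default, Go-templated, and JSON output forms of minikube status. As a standalone sketch (profile name taken from the log; the "kublet" spelling is verbatim from the test's format string):

# human-readable status for the profile
out/minikube-linux-amd64 -p functional-463261 status
# custom Go template over the status struct fields
out/minikube-linux-amd64 -p functional-463261 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# machine-readable JSON, convenient for scripting
out/minikube-linux-amd64 -p functional-463261 status -o json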

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-463261 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-463261 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-2tps5" [35a6016e-291f-4476-a8d2-9cc1baf8098a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-2tps5" [35a6016e-291f-4476-a8d2-9cc1baf8098a] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003473275s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30329
functional_test.go:1680: http://192.168.49.2:30329: success! body:
Request served by hello-node-connect-9f67c86d4-2tps5

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30329
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.66s)
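The same connectivity check can be reproduced by hand; the commands below mirror the test, and the final curl stands in for the Go HTTP client whose request headers appear in the body above:

kubectl --context functional-463261 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-463261 expose deployment hello-node-connect --type=NodePort --port=8080
# resolves to something like http://192.168.49.2:30329 once the NodePort exists
URL=$(out/minikube-linux-amd64 -p functional-463261 service hello-node-connect --url)
curl -s "$URL"    # the echo server replies with the request it received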

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (25.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7a103b4a-e493-4056-9812-39e905cfa953] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004070693s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-463261 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-463261 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-463261 get pvc myclaim -o=json
I1221 19:56:38.233925   12711 retry.go:84] will retry after 1.3s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:57ce1589-0bd8-4479-92f0-43b1bef4ebf8 ResourceVersion:677 Generation:0 CreationTimestamp:2025-12-21 19:56:38 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a84390 VolumeMode:0xc001a843a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-463261 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-463261 apply -f testdata/storage-provisioner/pod.yaml
I1221 19:56:39.722274   12711 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9553159b-d4f8-45fb-8c4c-99e4cc257aca] Pending
helpers_test.go:353: "sp-pod" [9553159b-d4f8-45fb-8c4c-99e4cc257aca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [9553159b-d4f8-45fb-8c4c-99e4cc257aca] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.002867165s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-463261 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-463261 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-463261 apply -f testdata/storage-provisioner/pod.yaml
I1221 19:56:51.443672   12711 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8e83321e-fe17-4df7-93ac-ce2ccc794cec] Pending
helpers_test.go:353: "sp-pod" [8e83321e-fe17-4df7-93ac-ce2ccc794cec] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00321841s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-463261 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (25.73s)
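testdata/storage-provisioner/pvc.yaml itself is not reproduced in the log, but the last-applied-configuration annotation above fixes its contents. A sketch consistent with it (not the literal test file):

cat <<'EOF' | kubectl --context functional-463261 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# watch until the minikube-hostpath provisioner moves it from Pending to Bound
kubectl --context functional-463261 get pvc myclaim -w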

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh -n functional-463261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cp functional-463261:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2662766231/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh -n functional-463261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh -n functional-463261 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.79s)
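The three cp calls exercise both copy directions; in shorthand (host-side destination paths other than the test's are illustrative):

# host file -> node filesystem
out/minikube-linux-amd64 -p functional-463261 cp testdata/cp-test.txt /home/docker/cp-test.txt
# node file -> host, by prefixing the source with the node name
out/minikube-linux-amd64 -p functional-463261 cp functional-463261:/home/docker/cp-test.txt ./cp-test.txt
# verify the file landed on the node
out/minikube-linux-amd64 -p functional-463261 ssh -n functional-463261 "sudo cat /home/docker/cp-test.txt"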

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (47.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-463261 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-bgnvg" [3c458268-922d-4b6c-b8db-5907c2f2ea75] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-bgnvg" [3c458268-922d-4b6c-b8db-5907c2f2ea75] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 43.003073099s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;": exit status 1 (138.924517ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1221 19:57:14.216746   12711 retry.go:84] will retry after 1.3s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;": exit status 1 (83.276039ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;": exit status 1 (82.979363ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-463261 exec mysql-7d7b65bc95-bgnvg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (47.71s)
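The retried failures above are expected rather than flakes: just after the pod turns Running, mysqld is still initializing, so the client first sees ERROR 1045 and then ERROR 2002 before the query finally succeeds. A minimal wait loop doing the same thing (the -ppassword flag mirrors the test's invocation):

POD=$(kubectl --context functional-463261 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
# retry until mysqld inside the pod accepts the root connection
until kubectl --context functional-463261 exec "$POD" -- mysql -ppassword -e 'show databases;'; do
  sleep 2
done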

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12711/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/test/nested/copy/12711/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/ssl/certs/12711.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /usr/share/ca-certificates/12711.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/127112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/ssl/certs/127112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/127112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /usr/share/ca-certificates/127112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.88s)
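The checks above look for the synced certificate both under its original name (12711.pem / 127112.pem) and under a hash-style name (51391683.0 / 3ec20f2e.0). Assuming those .0 names follow openssl's subject-hash convention for CA files on the node, they can be derived from the PEM like this (sketch):

# print the subject hash a cert would be filed under in /etc/ssl/certs
openssl x509 -in 12711.pem -noout -subject_hash
# confirm both copies inside the node, as the test does
out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /etc/ssl/certs/12711.pem"
out/minikube-linux-amd64 -p functional-463261 ssh "sudo cat /usr/share/ca-certificates/12711.pem"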

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-463261 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.07s)
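The go-template above prints only the label keys of the first node; an equivalent quick check outside the test is:

# all labels on every node, key=value form
kubectl --context functional-463261 get nodes --show-labels
# or just the keys of the first node, as the test does
kubectl --context functional-463261 get nodes -o go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'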

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active docker": exit status 1 (306.552991ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active containerd": exit status 1 (315.833593ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.62s)
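Both non-zero exits above are the expected outcome, not failures: systemctl is-active prints "inactive" and exits 3 for a stopped unit, and minikube ssh propagates that code. A comparison against the runtime actually in use on this profile (crio) would look like:

out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active docker"       # inactive, exit 3
out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
out/minikube-linux-amd64 -p functional-463261 ssh "sudo systemctl is-active crio"         # expected: active, exit 0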

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463261 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-463261
localhost/kicbase/echo-server:functional-463261
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463261 image ls --format short --alsologtostderr:
I1221 19:57:01.287039   71870 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:01.287357   71870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:01.287369   71870 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:01.287376   71870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:01.287672   71870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:57:01.288446   71870 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:01.288592   71870 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:01.289217   71870 cli_runner.go:164] Run: docker container inspect functional-463261 --format={{.State.Status}}
I1221 19:57:01.314477   71870 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:01.314530   71870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463261
I1221 19:57:01.337812   71870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-463261/id_rsa Username:docker}
I1221 19:57:01.447839   71870 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463261 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-463261                     │ e7f404aef4fc5 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-463261                     │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463261 image ls --format table --alsologtostderr:
I1221 19:57:02.045629   72307 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:02.045718   72307 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.045726   72307 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:02.045729   72307 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.045931   72307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:57:02.046494   72307 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.046614   72307 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.047077   72307 cli_runner.go:164] Run: docker container inspect functional-463261 --format={{.State.Status}}
I1221 19:57:02.065252   72307 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:02.065306   72307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463261
I1221 19:57:02.082241   72307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-463261/id_rsa Username:docker}
I1221 19:57:02.178920   72307 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463261 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f93361932
1aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e7f404aef4fc514c094bcf5d98862a277995edd988da4a126030fd3be9d0fc4c","repoDigests":["localhost/minikube-local-cache-test@sha256:0d1740e64ab1be8332caebd4abc51dcf5bbdb04c974e90249f6ee709898f4c5f"],"repoTags":["localhost/minikube-local-cache-test:functional-463261"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14
ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k
8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e
11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"6e38f40d628db3002f56
17342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-463261"],"size":"4943877"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463261 image ls --format json --alsologtostderr:
I1221 19:57:01.808006   72153 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:01.808309   72153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:01.808320   72153 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:01.808324   72153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:01.808509   72153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:57:01.809041   72153 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:01.809163   72153 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:01.809745   72153 cli_runner.go:164] Run: docker container inspect functional-463261 --format={{.State.Status}}
I1221 19:57:01.831705   72153 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:01.831750   72153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463261
I1221 19:57:01.850762   72153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-463261/id_rsa Username:docker}
I1221 19:57:01.949511   72153 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)
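The JSON form is the easiest to post-process. For example, with jq (not part of the test) the output above flattens to one tag per line:

out/minikube-linux-amd64 -p functional-463261 image ls --format json \
  | jq -r '.[] | .repoTags[]?'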

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463261 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-463261
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e7f404aef4fc514c094bcf5d98862a277995edd988da4a126030fd3be9d0fc4c
repoDigests:
- localhost/minikube-local-cache-test@sha256:0d1740e64ab1be8332caebd4abc51dcf5bbdb04c974e90249f6ee709898f4c5f
repoTags:
- localhost/minikube-local-cache-test:functional-463261
size: "3330"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463261 image ls --format yaml --alsologtostderr:
I1221 19:57:02.238280   72380 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:02.238566   72380 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.238577   72380 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:02.238583   72380 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.238857   72380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:57:02.239573   72380 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.239717   72380 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.240327   72380 cli_runner.go:164] Run: docker container inspect functional-463261 --format={{.State.Status}}
I1221 19:57:02.260946   72380 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:02.261012   72380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463261
I1221 19:57:02.282966   72380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-463261/id_rsa Username:docker}
I1221 19:57:02.383756   72380 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh pgrep buildkitd: exit status 1 (295.612917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image build -t localhost/my-image:functional-463261 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 image build -t localhost/my-image:functional-463261 testdata/build --alsologtostderr: (2.579248409s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463261 image build -t localhost/my-image:functional-463261 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7a42d9dbd2c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-463261
--> 3bae710a429
Successfully tagged localhost/my-image:functional-463261
3bae710a42935aeb5329a5c5d59bd9819235af5a9c41aeb76ab8dcd683e36182
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463261 image build -t localhost/my-image:functional-463261 testdata/build --alsologtostderr:
I1221 19:57:02.587137   72561 out.go:360] Setting OutFile to fd 1 ...
I1221 19:57:02.587291   72561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.587302   72561 out.go:374] Setting ErrFile to fd 2...
I1221 19:57:02.587309   72561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1221 19:57:02.587619   72561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
I1221 19:57:02.588313   72561 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.589090   72561 config.go:182] Loaded profile config "functional-463261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1221 19:57:02.589705   72561 cli_runner.go:164] Run: docker container inspect functional-463261 --format={{.State.Status}}
I1221 19:57:02.612534   72561 ssh_runner.go:195] Run: systemctl --version
I1221 19:57:02.612592   72561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463261
I1221 19:57:02.632916   72561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/functional-463261/id_rsa Username:docker}
I1221 19:57:02.740069   72561 build_images.go:162] Building image from path: /tmp/build.3180234533.tar
I1221 19:57:02.740146   72561 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 19:57:02.750165   72561 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3180234533.tar
I1221 19:57:02.754371   72561 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3180234533.tar: stat -c "%s %y" /var/lib/minikube/build/build.3180234533.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3180234533.tar': No such file or directory
I1221 19:57:02.754404   72561 ssh_runner.go:362] scp /tmp/build.3180234533.tar --> /var/lib/minikube/build/build.3180234533.tar (3072 bytes)
I1221 19:57:02.776419   72561 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3180234533
I1221 19:57:02.785849   72561 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3180234533 -xf /var/lib/minikube/build/build.3180234533.tar
I1221 19:57:02.794917   72561 crio.go:315] Building image: /var/lib/minikube/build/build.3180234533
I1221 19:57:02.794986   72561 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-463261 /var/lib/minikube/build/build.3180234533 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1221 19:57:05.071832   72561 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-463261 /var/lib/minikube/build/build.3180234533 --cgroup-manager=cgroupfs: (2.276816947s)
I1221 19:57:05.071883   72561 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3180234533
I1221 19:57:05.080140   72561 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3180234533.tar
I1221 19:57:05.087206   72561 build_images.go:218] Built localhost/my-image:functional-463261 from /tmp/build.3180234533.tar
I1221 19:57:05.087252   72561 build_images.go:134] succeeded building to: functional-463261
I1221 19:57:05.087258   72561 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
2025/12/21 19:57:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.10s)
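
The three STEP lines in the build output above imply that the testdata/build context is essentially a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A by-hand rerun of what this test checks, using only commands that already appear in the log (profile name functional-463261 assumed), would be:

	out/minikube-linux-amd64 -p functional-463261 image build -t localhost/my-image:functional-463261 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-463261 image ls    # the localhost/my-image:functional-463261 tag should now be listed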

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image load --daemon kicbase/echo-server:functional-463261 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 image load --daemon kicbase/echo-server:functional-463261 --alsologtostderr: (1.094818353s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image load --daemon kicbase/echo-server:functional-463261 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 65729: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-463261
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image load --daemon kicbase/echo-server:functional-463261 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (15.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-463261 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [5282ea99-e30e-4f1d-af33-2fff861c8e93] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [5282ea99-e30e-4f1d-af33-2fff861c8e93] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.003696815s
I1221 19:56:48.404524   12711 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (15.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image save kicbase/echo-server:functional-463261 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image rm kicbase/echo-server:functional-463261 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (2.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.169857975s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (2.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (3.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-463261
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 image save --daemon kicbase/echo-server:functional-463261 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-463261 image save --daemon kicbase/echo-server:functional-463261 --alsologtostderr: (3.754631779s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (3.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-463261 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.42.237 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-463261 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (7.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-463261 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-463261 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-pqsrz" [cc99de97-0b74-48c8-be9f-143c2c2bd935] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-pqsrz" [cc99de97-0b74-48c8-be9f-143c2c2bd935] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003645563s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (7.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "325.264528ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.13388ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "321.147307ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.666ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (5.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun176628175/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766347012870346999" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun176628175/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766347012870346999" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun176628175/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766347012870346999" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun176628175/001/test-1766347012870346999
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.143798ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 19:56:53.137826   12711 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 19:56 test-1766347012870346999
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh cat /mount-9p/test-1766347012870346999
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-463261 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3654159d-1b00-4696-96d1-c9b88522eb35] Pending
helpers_test.go:353: "busybox-mount" [3654159d-1b00-4696-96d1-c9b88522eb35] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3654159d-1b00-4696-96d1-c9b88522eb35] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3654159d-1b00-4696-96d1-c9b88522eb35] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002547292s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-463261 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun176628175/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (5.57s)
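
A condensed, by-hand version of the 9p mount round trip exercised here (sketch only; <host-dir> stands in for the temp directory created by the test, and the mount command is left running in the background the way the test's daemon: lines do):

	out/minikube-linux-amd64 mount -p functional-463261 <host-dir>:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-463261 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-463261 ssh "sudo umount -f /mount-9p"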

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service list -o json
functional_test.go:1504: Took "893.659675ms" to run "out/minikube-linux-amd64 -p functional-463261 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32214
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4122384315/001:/mount-9p --alsologtostderr -v=1 --port 46811]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.491107ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1221 19:56:58.751059   12711 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4122384315/001:/mount-9p --alsologtostderr -v=1 --port 46811] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "sudo umount -f /mount-9p": exit status 1 (299.852688ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-463261 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4122384315/001:/mount-9p --alsologtostderr -v=1 --port 46811] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32214
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.58s)
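
The HTTPS and URL lookups above both resolve to NodePort 32214 on node IP 192.168.49.2. By hand (profile, service name, and endpoint taken from the log; the trailing curl is a hypothetical follow-up request, not part of the test):

	out/minikube-linux-amd64 -p functional-463261 service hello-node --url
	curl http://192.168.49.2:32214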

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T" /mount1: exit status 1 (367.884706ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-463261 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-463261 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463261 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1828899378/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-463261
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (155.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1221 19:58:00.986141   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:28.672154   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.255071   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.260401   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.270675   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.290995   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.331367   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.411832   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.572331   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:48.892950   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:49.533856   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:50.814089   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:53.375284   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:58:58.496332   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:59:08.737096   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 19:59:29.218206   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m34.417328875s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (155.12s)
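
The profile started here uses --ha, which brings up additional control-plane nodes (the later status output shows ha-927867, ha-927867-m02 and ha-927867-m03 as control planes). The equivalent invocation by hand, with the flags taken from the log (sketch only):

	out/minikube-linux-amd64 -p ha-927867 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5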

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 kubectl -- rollout status deployment/busybox: (1.537710314s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-44blb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-wlnwk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-44blb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-wlnwk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-44blb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-wlnwk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.54s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-44blb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-44blb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-wlnwk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-wlnwk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
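
What each pod-level check above amounts to, run against a single pod (pod name and host gateway address taken from the log; sketch only):

	out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-927867 kubectl -- exec busybox-7b57f96db7-42dmv -- sh -c "ping -c 1 192.168.49.1"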

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node add --alsologtostderr -v 5
E1221 20:00:10.178647   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 node add --alsologtostderr -v 5: (26.976512738s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-927867 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp testdata/cp-test.txt ha-927867:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile900678888/001/cp-test_ha-927867.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867:/home/docker/cp-test.txt ha-927867-m02:/home/docker/cp-test_ha-927867_ha-927867-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test_ha-927867_ha-927867-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867:/home/docker/cp-test.txt ha-927867-m03:/home/docker/cp-test_ha-927867_ha-927867-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test_ha-927867_ha-927867-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867:/home/docker/cp-test.txt ha-927867-m04:/home/docker/cp-test_ha-927867_ha-927867-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test_ha-927867_ha-927867-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp testdata/cp-test.txt ha-927867-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile900678888/001/cp-test_ha-927867-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m02:/home/docker/cp-test.txt ha-927867:/home/docker/cp-test_ha-927867-m02_ha-927867.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test_ha-927867-m02_ha-927867.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m02:/home/docker/cp-test.txt ha-927867-m03:/home/docker/cp-test_ha-927867-m02_ha-927867-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test_ha-927867-m02_ha-927867-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m02:/home/docker/cp-test.txt ha-927867-m04:/home/docker/cp-test_ha-927867-m02_ha-927867-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test_ha-927867-m02_ha-927867-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp testdata/cp-test.txt ha-927867-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile900678888/001/cp-test_ha-927867-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m03:/home/docker/cp-test.txt ha-927867:/home/docker/cp-test_ha-927867-m03_ha-927867.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test_ha-927867-m03_ha-927867.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m03:/home/docker/cp-test.txt ha-927867-m02:/home/docker/cp-test_ha-927867-m03_ha-927867-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test_ha-927867-m03_ha-927867-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m03:/home/docker/cp-test.txt ha-927867-m04:/home/docker/cp-test_ha-927867-m03_ha-927867-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test_ha-927867-m03_ha-927867-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp testdata/cp-test.txt ha-927867-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile900678888/001/cp-test_ha-927867-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m04:/home/docker/cp-test.txt ha-927867:/home/docker/cp-test_ha-927867-m04_ha-927867.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867 "sudo cat /home/docker/cp-test_ha-927867-m04_ha-927867.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m04:/home/docker/cp-test.txt ha-927867-m02:/home/docker/cp-test_ha-927867-m04_ha-927867-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test_ha-927867-m04_ha-927867-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 cp ha-927867-m04:/home/docker/cp-test.txt ha-927867-m03:/home/docker/cp-test_ha-927867-m04_ha-927867-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m03 "sudo cat /home/docker/cp-test_ha-927867-m04_ha-927867-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.73s)
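
The copy/verify pattern repeated above, reduced to a single host-to-node round trip (node name and paths as in the log; sketch only):

	out/minikube-linux-amd64 -p ha-927867 cp testdata/cp-test.txt ha-927867-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-927867 ssh -n ha-927867-m02 "sudo cat /home/docker/cp-test.txt"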

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 node stop m02 --alsologtostderr -v 5: (13.959471627s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5: exit status 7 (670.851582ms)

                                                
                                                
-- stdout --
	ha-927867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927867-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-927867-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:01:00.811709   93240 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:01:00.811949   93240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:01:00.811956   93240 out.go:374] Setting ErrFile to fd 2...
	I1221 20:01:00.811960   93240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:01:00.812171   93240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:01:00.812356   93240 out.go:368] Setting JSON to false
	I1221 20:01:00.812383   93240 mustload.go:66] Loading cluster: ha-927867
	I1221 20:01:00.812511   93240 notify.go:221] Checking for updates...
	I1221 20:01:00.812795   93240 config.go:182] Loaded profile config "ha-927867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:01:00.812812   93240 status.go:174] checking status of ha-927867 ...
	I1221 20:01:00.813375   93240 cli_runner.go:164] Run: docker container inspect ha-927867 --format={{.State.Status}}
	I1221 20:01:00.833298   93240 status.go:371] ha-927867 host status = "Running" (err=<nil>)
	I1221 20:01:00.833340   93240 host.go:66] Checking if "ha-927867" exists ...
	I1221 20:01:00.833734   93240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927867
	I1221 20:01:00.851526   93240 host.go:66] Checking if "ha-927867" exists ...
	I1221 20:01:00.851774   93240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:01:00.851831   93240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927867
	I1221 20:01:00.869474   93240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/ha-927867/id_rsa Username:docker}
	I1221 20:01:00.963997   93240 ssh_runner.go:195] Run: systemctl --version
	I1221 20:01:00.969896   93240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:01:00.981062   93240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:01:01.036329   93240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-21 20:01:01.02675774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:01:01.036912   93240 kubeconfig.go:125] found "ha-927867" server: "https://192.168.49.254:8443"
	I1221 20:01:01.036941   93240 api_server.go:166] Checking apiserver status ...
	I1221 20:01:01.036990   93240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:01:01.048296   93240 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup
	W1221 20:01:01.056343   93240 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:01:01.056388   93240 ssh_runner.go:195] Run: ls
	I1221 20:01:01.059850   93240 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1221 20:01:01.063703   93240 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1221 20:01:01.063722   93240 status.go:463] ha-927867 apiserver status = Running (err=<nil>)
	I1221 20:01:01.063731   93240 status.go:176] ha-927867 status: &{Name:ha-927867 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:01:01.063746   93240 status.go:174] checking status of ha-927867-m02 ...
	I1221 20:01:01.063975   93240 cli_runner.go:164] Run: docker container inspect ha-927867-m02 --format={{.State.Status}}
	I1221 20:01:01.081210   93240 status.go:371] ha-927867-m02 host status = "Stopped" (err=<nil>)
	I1221 20:01:01.081251   93240 status.go:384] host is not running, skipping remaining checks
	I1221 20:01:01.081266   93240 status.go:176] ha-927867-m02 status: &{Name:ha-927867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:01:01.081291   93240 status.go:174] checking status of ha-927867-m03 ...
	I1221 20:01:01.081538   93240 cli_runner.go:164] Run: docker container inspect ha-927867-m03 --format={{.State.Status}}
	I1221 20:01:01.098084   93240 status.go:371] ha-927867-m03 host status = "Running" (err=<nil>)
	I1221 20:01:01.098106   93240 host.go:66] Checking if "ha-927867-m03" exists ...
	I1221 20:01:01.098394   93240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927867-m03
	I1221 20:01:01.115116   93240 host.go:66] Checking if "ha-927867-m03" exists ...
	I1221 20:01:01.115384   93240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:01:01.115420   93240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927867-m03
	I1221 20:01:01.131846   93240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/ha-927867-m03/id_rsa Username:docker}
	I1221 20:01:01.225329   93240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:01:01.237482   93240 kubeconfig.go:125] found "ha-927867" server: "https://192.168.49.254:8443"
	I1221 20:01:01.237507   93240 api_server.go:166] Checking apiserver status ...
	I1221 20:01:01.237538   93240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:01:01.247745   93240 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W1221 20:01:01.255376   93240 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:01:01.255435   93240 ssh_runner.go:195] Run: ls
	I1221 20:01:01.258696   93240 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1221 20:01:01.262643   93240 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1221 20:01:01.262664   93240 status.go:463] ha-927867-m03 apiserver status = Running (err=<nil>)
	I1221 20:01:01.262674   93240 status.go:176] ha-927867-m03 status: &{Name:ha-927867-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:01:01.262698   93240 status.go:174] checking status of ha-927867-m04 ...
	I1221 20:01:01.262919   93240 cli_runner.go:164] Run: docker container inspect ha-927867-m04 --format={{.State.Status}}
	I1221 20:01:01.280980   93240 status.go:371] ha-927867-m04 host status = "Running" (err=<nil>)
	I1221 20:01:01.281000   93240 host.go:66] Checking if "ha-927867-m04" exists ...
	I1221 20:01:01.281246   93240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-927867-m04
	I1221 20:01:01.298746   93240 host.go:66] Checking if "ha-927867-m04" exists ...
	I1221 20:01:01.299017   93240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:01:01.299076   93240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-927867-m04
	I1221 20:01:01.316877   93240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32804 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/ha-927867-m04/id_rsa Username:docker}
	I1221 20:01:01.411749   93240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:01:01.424207   93240 status.go:176] ha-927867-m04 status: &{Name:ha-927867-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.54s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 node start m02 --alsologtostderr -v 5: (7.628364979s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.16s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 stop --alsologtostderr -v 5
E1221 20:01:31.074986   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.081395   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.091780   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.112111   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.152425   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.232883   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.393493   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:31.714462   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:32.098905   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:32.355305   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:33.636456   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:36.197704   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:01:41.318255   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 stop --alsologtostderr -v 5: (35.12625952s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 start --wait true --alsologtostderr -v 5
E1221 20:01:51.558800   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:02:12.039970   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 start --wait true --alsologtostderr -v 5: (1m0.904448567s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.16s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node delete m03 --alsologtostderr -v 5
E1221 20:02:53.000692   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 node delete m03 --alsologtostderr -v 5: (9.773819026s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (48.49s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 stop --alsologtostderr -v 5
E1221 20:03:00.985325   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 stop --alsologtostderr -v 5: (48.373240179s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5: exit status 7 (117.843021ms)

                                                
                                                
-- stdout --
	ha-927867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-927867-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:03:47.403987  107398 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:03:47.404115  107398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:03:47.404126  107398 out.go:374] Setting ErrFile to fd 2...
	I1221 20:03:47.404132  107398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:03:47.404436  107398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:03:47.404621  107398 out.go:368] Setting JSON to false
	I1221 20:03:47.404648  107398 mustload.go:66] Loading cluster: ha-927867
	I1221 20:03:47.404811  107398 notify.go:221] Checking for updates...
	I1221 20:03:47.404983  107398 config.go:182] Loaded profile config "ha-927867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:03:47.404998  107398 status.go:174] checking status of ha-927867 ...
	I1221 20:03:47.405452  107398 cli_runner.go:164] Run: docker container inspect ha-927867 --format={{.State.Status}}
	I1221 20:03:47.424294  107398 status.go:371] ha-927867 host status = "Stopped" (err=<nil>)
	I1221 20:03:47.424320  107398 status.go:384] host is not running, skipping remaining checks
	I1221 20:03:47.424329  107398 status.go:176] ha-927867 status: &{Name:ha-927867 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:03:47.424357  107398 status.go:174] checking status of ha-927867-m02 ...
	I1221 20:03:47.424705  107398 cli_runner.go:164] Run: docker container inspect ha-927867-m02 --format={{.State.Status}}
	I1221 20:03:47.444631  107398 status.go:371] ha-927867-m02 host status = "Stopped" (err=<nil>)
	I1221 20:03:47.444660  107398 status.go:384] host is not running, skipping remaining checks
	I1221 20:03:47.444671  107398 status.go:176] ha-927867-m02 status: &{Name:ha-927867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:03:47.444696  107398 status.go:174] checking status of ha-927867-m04 ...
	I1221 20:03:47.444932  107398 cli_runner.go:164] Run: docker container inspect ha-927867-m04 --format={{.State.Status}}
	I1221 20:03:47.460939  107398 status.go:371] ha-927867-m04 host status = "Stopped" (err=<nil>)
	I1221 20:03:47.461005  107398 status.go:384] host is not running, skipping remaining checks
	I1221 20:03:47.461014  107398 status.go:176] ha-927867-m04 status: &{Name:ha-927867-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (54.48s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1221 20:03:48.255343   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:04:14.921382   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:04:15.939640   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.700528409s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.15s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-927867 node add --control-plane --alsologtostderr -v 5: (38.269465194s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-927867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (37.59s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-783448 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-783448 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.591579973s)
--- PASS: TestJSONOutput/start/Command (37.59s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.96s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-783448 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-783448 --output=json --user=testUser: (7.961261128s)
--- PASS: TestJSONOutput/stop/Command (7.96s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-061005 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-061005 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.91365ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"643b8927-abd3-4a9e-9914-49ce1586219d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-061005] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8c8b86b-4bf0-4d63-86c0-266bdab524e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"b73110af-4a4d-482b-b9af-c6773e4cd0b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b33ff14c-7f38-4588-a11f-35c2de4a3ff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig"}}
	{"specversion":"1.0","id":"7247deaf-25da-454d-8fe3-bd96002ce71b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube"}}
	{"specversion":"1.0","id":"1214a1d5-ba06-4cfa-b938-fd87696bcf05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6d0e40cf-81b4-4e71-a8f3-4541a9bc0ecc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"584616fb-fad4-4773-aff4-03f770ce0a29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-061005" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-061005
--- PASS: TestErrorJSONOutput (0.22s)
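Note for readers of the captured --output=json lines above: each line is a self-contained CloudEvents-style JSON object, with the human-readable text in data.message (error events also carry data.exitcode and data.name). The following is a minimal, hypothetical Go sketch (not part of the minikube test suite; the file name printevents.go is an assumption) showing how such a stream could be decoded, e.g. out/minikube-linux-amd64 start -p demo --output=json | go run printevents.go

package main

// printevents.go (hypothetical helper, not part of this report's test suite):
// decodes CloudEvents-style JSON lines like the ones captured above and
// prints each event's type together with its data.message field.

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error
	Data map[string]string `json:"data"` // message, name, currentstep, exitcode, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore lines that are not JSON event objects
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}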

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.78s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-584106 --network=
E1221 20:06:31.074881   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-584106 --network=: (23.668758485s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-584106" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-584106
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-584106: (2.094036919s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.78s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.14s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-385527 --network=bridge
E1221 20:06:58.762422   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-385527 --network=bridge: (21.141259661s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-385527" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-385527
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-385527: (1.982051217s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.14s)

                                                
                                    
TestKicExistingNetwork (23.19s)
=== RUN   TestKicExistingNetwork
I1221 20:07:14.891776   12711 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1221 20:07:14.907957   12711 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1221 20:07:14.908037   12711 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1221 20:07:14.908071   12711 cli_runner.go:164] Run: docker network inspect existing-network
W1221 20:07:14.924538   12711 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1221 20:07:14.924571   12711 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1221 20:07:14.924595   12711 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1221 20:07:14.924759   12711 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1221 20:07:14.942144   12711 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f29a930c06e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:8b:29:89:af:bd} reservation:<nil>}
I1221 20:07:14.942592   12711 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019bede0}
I1221 20:07:14.942619   12711 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1221 20:07:14.942682   12711 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1221 20:07:14.986438   12711 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-459337 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-459337 --network=existing-network: (21.091056286s)
helpers_test.go:176: Cleaning up "existing-network-459337" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-459337
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-459337: (1.971438156s)
I1221 20:07:38.065448   12711 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.19s)

                                                
                                    
TestKicCustomSubnet (25.24s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-843460 --subnet=192.168.60.0/24
E1221 20:08:00.985394   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-843460 --subnet=192.168.60.0/24: (23.141426138s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-843460 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-843460" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-843460
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-843460: (2.080218731s)
--- PASS: TestKicCustomSubnet (25.24s)

                                                
                                    
TestKicStaticIP (26.05s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-395840 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-395840 --static-ip=192.168.200.200: (23.799232585s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-395840 ip
helpers_test.go:176: Cleaning up "static-ip-395840" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-395840
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-395840: (2.106918787s)
--- PASS: TestKicStaticIP (26.05s)

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (47.32s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-694667 --driver=docker  --container-runtime=crio
E1221 20:08:48.260380   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-694667 --driver=docker  --container-runtime=crio: (23.019984236s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-698444 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-698444 --driver=docker  --container-runtime=crio: (18.460537853s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-694667
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-698444
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-698444" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-698444
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-698444: (2.298209494s)
helpers_test.go:176: Cleaning up "first-694667" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-694667
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-694667: (2.338469154s)
--- PASS: TestMinikubeProfile (47.32s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.69s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-692896 --memory=3072 --mount-string /tmp/TestMountStartserial2108911893/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-692896 --memory=3072 --mount-string /tmp/TestMountStartserial2108911893/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.690713124s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-692896 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.69s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-708366 --memory=3072 --mount-string /tmp/TestMountStartserial2108911893/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1221 20:09:24.033385   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-708366 --memory=3072 --mount-string /tmp/TestMountStartserial2108911893/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.685328296s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.69s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708366 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-692896 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-692896 --alsologtostderr -v=5: (1.663995734s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708366 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-708366
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-708366: (1.242062893s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.1s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-708366
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-708366: (6.101850376s)
--- PASS: TestMountStart/serial/RestartStopped (7.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708366 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (66.3s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-120839 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-120839 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.82173308s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.30s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.12s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-120839 -- rollout status deployment/busybox: (1.670894402s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-ffzmj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-n6pd7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-ffzmj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-n6pd7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-ffzmj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-n6pd7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.12s)
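
DeployApp2Nodes applies a two-replica busybox Deployment and verifies in-cluster DNS from each pod. A condensed sketch of that check (profile and pod names are hypothetical; the real pod names come from the jsonpath query shown above):

  minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  minikube kubectl -p multinode-demo -- rollout status deployment/busybox
  minikube kubectl -p multinode-demo -- exec busybox-<pod-suffix> -- nslookup kubernetes.default.svc.cluster.local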

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-ffzmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-ffzmj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-n6pd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-120839 -- exec busybox-7b57f96db7-n6pd7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
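
PingHostFrom2Pods resolves host.minikube.internal from inside each pod and pings the address it returns (192.168.67.1 in this run). A minimal sketch with a hypothetical pod name; the awk/cut pipeline is the one the test uses to pull the IP out of the nslookup output:

  minikube kubectl -p multinode-demo -- exec busybox-<pod-suffix> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  minikube kubectl -p multinode-demo -- exec busybox-<pod-suffix> -- sh -c "ping -c 1 <host-ip>"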

                                                
                                    
TestMultiNode/serial/AddNode (27.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-120839 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-120839 -v=5 --alsologtostderr: (26.604746294s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.23s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-120839 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp testdata/cp-test.txt multinode-120839:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2567080022/001/cp-test_multinode-120839.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839:/home/docker/cp-test.txt multinode-120839-m02:/home/docker/cp-test_multinode-120839_multinode-120839-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test_multinode-120839_multinode-120839-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839:/home/docker/cp-test.txt multinode-120839-m03:/home/docker/cp-test_multinode-120839_multinode-120839-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test_multinode-120839_multinode-120839-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp testdata/cp-test.txt multinode-120839-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2567080022/001/cp-test_multinode-120839-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m02:/home/docker/cp-test.txt multinode-120839:/home/docker/cp-test_multinode-120839-m02_multinode-120839.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test_multinode-120839-m02_multinode-120839.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m02:/home/docker/cp-test.txt multinode-120839-m03:/home/docker/cp-test_multinode-120839-m02_multinode-120839-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test_multinode-120839-m02_multinode-120839-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp testdata/cp-test.txt multinode-120839-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2567080022/001/cp-test_multinode-120839-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m03:/home/docker/cp-test.txt multinode-120839:/home/docker/cp-test_multinode-120839-m03_multinode-120839.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839 "sudo cat /home/docker/cp-test_multinode-120839-m03_multinode-120839.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 cp multinode-120839-m03:/home/docker/cp-test.txt multinode-120839-m02:/home/docker/cp-test_multinode-120839-m03_multinode-120839-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 ssh -n multinode-120839-m02 "sudo cat /home/docker/cp-test_multinode-120839-m03_multinode-120839-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.57s)
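
CopyFile round-trips a test file with minikube cp in three directions (host to node, node back to host, node to node), checking each copy over SSH. A minimal sketch with hypothetical profile and node names:

  # host -> node, then verify on the node
  minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
  minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
  # node -> host
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
  # node -> another node
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt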

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-120839 node stop m03: (1.255264393s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-120839 status: exit status 7 (479.872057ms)

                                                
                                                
-- stdout --
	multinode-120839
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-120839-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-120839-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr: exit status 7 (480.341371ms)

                                                
                                                
-- stdout --
	multinode-120839
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-120839-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-120839-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:11:28.399577  167494 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:11:28.399682  167494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:11:28.399690  167494 out.go:374] Setting ErrFile to fd 2...
	I1221 20:11:28.399694  167494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:11:28.399902  167494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:11:28.400084  167494 out.go:368] Setting JSON to false
	I1221 20:11:28.400109  167494 mustload.go:66] Loading cluster: multinode-120839
	I1221 20:11:28.400243  167494 notify.go:221] Checking for updates...
	I1221 20:11:28.400906  167494 config.go:182] Loaded profile config "multinode-120839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:11:28.400946  167494 status.go:174] checking status of multinode-120839 ...
	I1221 20:11:28.402070  167494 cli_runner.go:164] Run: docker container inspect multinode-120839 --format={{.State.Status}}
	I1221 20:11:28.420747  167494 status.go:371] multinode-120839 host status = "Running" (err=<nil>)
	I1221 20:11:28.420782  167494 host.go:66] Checking if "multinode-120839" exists ...
	I1221 20:11:28.421102  167494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-120839
	I1221 20:11:28.438719  167494 host.go:66] Checking if "multinode-120839" exists ...
	I1221 20:11:28.438960  167494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:11:28.439006  167494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-120839
	I1221 20:11:28.456366  167494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/multinode-120839/id_rsa Username:docker}
	I1221 20:11:28.550122  167494 ssh_runner.go:195] Run: systemctl --version
	I1221 20:11:28.556591  167494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:11:28.568164  167494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:11:28.623125  167494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-21 20:11:28.614150425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:11:28.623674  167494 kubeconfig.go:125] found "multinode-120839" server: "https://192.168.67.2:8443"
	I1221 20:11:28.623703  167494 api_server.go:166] Checking apiserver status ...
	I1221 20:11:28.623745  167494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 20:11:28.634850  167494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	W1221 20:11:28.642795  167494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1221 20:11:28.642845  167494 ssh_runner.go:195] Run: ls
	I1221 20:11:28.646761  167494 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1221 20:11:28.650938  167494 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1221 20:11:28.650959  167494 status.go:463] multinode-120839 apiserver status = Running (err=<nil>)
	I1221 20:11:28.650968  167494 status.go:176] multinode-120839 status: &{Name:multinode-120839 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:11:28.650985  167494 status.go:174] checking status of multinode-120839-m02 ...
	I1221 20:11:28.651208  167494 cli_runner.go:164] Run: docker container inspect multinode-120839-m02 --format={{.State.Status}}
	I1221 20:11:28.668668  167494 status.go:371] multinode-120839-m02 host status = "Running" (err=<nil>)
	I1221 20:11:28.668688  167494 host.go:66] Checking if "multinode-120839-m02" exists ...
	I1221 20:11:28.669000  167494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-120839-m02
	I1221 20:11:28.685295  167494 host.go:66] Checking if "multinode-120839-m02" exists ...
	I1221 20:11:28.685537  167494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 20:11:28.685572  167494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-120839-m02
	I1221 20:11:28.701390  167494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/22179-9159/.minikube/machines/multinode-120839-m02/id_rsa Username:docker}
	I1221 20:11:28.793982  167494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 20:11:28.805668  167494 status.go:176] multinode-120839-m02 status: &{Name:multinode-120839-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:11:28.805701  167494 status.go:174] checking status of multinode-120839-m03 ...
	I1221 20:11:28.805931  167494 cli_runner.go:164] Run: docker container inspect multinode-120839-m03 --format={{.State.Status}}
	I1221 20:11:28.823088  167494 status.go:371] multinode-120839-m03 host status = "Stopped" (err=<nil>)
	I1221 20:11:28.823108  167494 status.go:384] host is not running, skipping remaining checks
	I1221 20:11:28.823114  167494 status.go:176] multinode-120839-m03 status: &{Name:multinode-120839-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 node start m03 -v=5 --alsologtostderr
E1221 20:11:31.075370   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-120839 node start m03 -v=5 --alsologtostderr: (6.408273427s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.08s)
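
StopNode and StartAfterStop stop a single worker and bring it back; while any node is down, minikube status deliberately exits with code 7, which the tests above treat as expected. A minimal sketch, assuming a hypothetical profile:

  minikube -p multinode-demo node stop m03
  minikube -p multinode-demo status            # exit status 7 while m03 is stopped
  minikube -p multinode-demo node start m03
  kubectl get nodes                            # all nodes should return to Ready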

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-120839
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-120839
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-120839: (31.223604294s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-120839 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-120839 --wait=true -v=5 --alsologtostderr: (49.144612372s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-120839
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.49s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 node delete m03
E1221 20:13:00.985272   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-120839 node delete m03: (4.611413445s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)
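
DeleteNode removes the third node and then checks that only the remaining nodes are listed and Ready. A minimal sketch:

  minikube -p multinode-demo node delete m03
  minikube -p multinode-demo status            # only the control plane and m02 remain
  kubectl get nodes                            # the deleted node should be gone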

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-120839 stop: (30.556408181s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-120839 status: exit status 7 (97.515791ms)

                                                
                                                
-- stdout --
	multinode-120839
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-120839-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr: exit status 7 (97.194251ms)

                                                
                                                
-- stdout --
	multinode-120839
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-120839-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:13:32.291195  177296 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:13:32.291320  177296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:13:32.291329  177296 out.go:374] Setting ErrFile to fd 2...
	I1221 20:13:32.291333  177296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:13:32.291496  177296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:13:32.291667  177296 out.go:368] Setting JSON to false
	I1221 20:13:32.291693  177296 mustload.go:66] Loading cluster: multinode-120839
	I1221 20:13:32.291740  177296 notify.go:221] Checking for updates...
	I1221 20:13:32.292047  177296 config.go:182] Loaded profile config "multinode-120839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:13:32.292064  177296 status.go:174] checking status of multinode-120839 ...
	I1221 20:13:32.292470  177296 cli_runner.go:164] Run: docker container inspect multinode-120839 --format={{.State.Status}}
	I1221 20:13:32.313693  177296 status.go:371] multinode-120839 host status = "Stopped" (err=<nil>)
	I1221 20:13:32.313775  177296 status.go:384] host is not running, skipping remaining checks
	I1221 20:13:32.313785  177296 status.go:176] multinode-120839 status: &{Name:multinode-120839 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 20:13:32.313842  177296 status.go:174] checking status of multinode-120839-m02 ...
	I1221 20:13:32.314124  177296 cli_runner.go:164] Run: docker container inspect multinode-120839-m02 --format={{.State.Status}}
	I1221 20:13:32.332435  177296 status.go:371] multinode-120839-m02 host status = "Stopped" (err=<nil>)
	I1221 20:13:32.332455  177296 status.go:384] host is not running, skipping remaining checks
	I1221 20:13:32.332463  177296 status.go:176] multinode-120839-m02 status: &{Name:multinode-120839-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.75s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-120839 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1221 20:13:48.255113   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-120839 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.851839115s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-120839 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.43s)
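
RestartKeepsNodes, StopMultiNode and RestartMultiNode cycle the whole profile: a full stop leaves every component Stopped (status exits with code 7), and a subsequent start with --wait=true restores all nodes. A minimal sketch:

  minikube stop -p multinode-demo
  minikube -p multinode-demo status            # exit status 7: host/kubelet/apiserver Stopped
  minikube start -p multinode-demo --wait=true --driver=docker --container-runtime=crio
  kubectl get nodes                            # the same set of nodes comes back Ready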

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-120839
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-120839-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-120839-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.242066ms)

                                                
                                                
-- stdout --
	* [multinode-120839-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-120839-m02' is duplicated with machine name 'multinode-120839-m02' in profile 'multinode-120839'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-120839-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-120839-m03 --driver=docker  --container-runtime=crio: (22.148093738s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-120839
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-120839: exit status 80 (277.437633ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-120839 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-120839-m03 already exists in multinode-120839-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-120839-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-120839-m03: (2.313089859s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.87s)
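
ValidateNameConflict shows that a new profile whose name collides with an existing machine name (here the -m02 worker) is rejected with exit status 14, and that node add refuses a node that already exists in another profile (exit status 80). Listing profiles first avoids both collisions; a minimal sketch:

  minikube profile list --output json          # see which profile and node names are already taken
  minikube start -p some-unique-name --driver=docker --container-runtime=crio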

                                                
                                    
TestScheduledStopUnix (97.43s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-451588 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-451588 --memory=3072 --driver=docker  --container-runtime=crio: (21.910294577s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-451588 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:15:07.753759  187212 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:15:07.753979  187212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:07.753987  187212 out.go:374] Setting ErrFile to fd 2...
	I1221 20:15:07.753991  187212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:07.754193  187212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:15:07.754436  187212 out.go:368] Setting JSON to false
	I1221 20:15:07.754537  187212 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:07.754823  187212 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:15:07.754881  187212 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/scheduled-stop-451588/config.json ...
	I1221 20:15:07.755052  187212 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:07.755142  187212 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-451588 -n scheduled-stop-451588
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-451588 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:15:08.137181  187363 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:15:08.137472  187363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:08.137480  187363 out.go:374] Setting ErrFile to fd 2...
	I1221 20:15:08.137484  187363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:08.137683  187363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:15:08.137964  187363 out.go:368] Setting JSON to false
	I1221 20:15:08.138167  187363 daemonize_unix.go:73] killing process 187245 as it is an old scheduled stop
	I1221 20:15:08.138296  187363 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:08.138629  187363 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:15:08.138697  187363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/scheduled-stop-451588/config.json ...
	I1221 20:15:08.138871  187363 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:08.138994  187363 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1221 20:15:08.142907   12711 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/scheduled-stop-451588/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-451588 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1221 20:15:11.300034   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-451588 -n scheduled-stop-451588
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-451588
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-451588 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1221 20:15:33.978092  188063 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:15:33.978355  188063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:33.978365  188063 out.go:374] Setting ErrFile to fd 2...
	I1221 20:15:33.978370  188063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:15:33.978581  188063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:15:33.978814  188063 out.go:368] Setting JSON to false
	I1221 20:15:33.978882  188063 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:33.979153  188063 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:15:33.979240  188063 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/scheduled-stop-451588/config.json ...
	I1221 20:15:33.979426  188063 mustload.go:66] Loading cluster: scheduled-stop-451588
	I1221 20:15:33.979526  188063 config.go:182] Loaded profile config "scheduled-stop-451588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-451588
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-451588: exit status 7 (79.147189ms)

                                                
                                                
-- stdout --
	scheduled-stop-451588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-451588 -n scheduled-stop-451588
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-451588 -n scheduled-stop-451588: exit status 7 (75.170592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-451588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-451588
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-451588: (4.078982977s)
--- PASS: TestScheduledStopUnix (97.43s)
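
TestScheduledStopUnix drives the scheduled-stop workflow: schedule a stop, replace it with a shorter schedule, cancel it, then schedule again and let it fire, after which status reports Stopped (exit code 7). A minimal sketch with a hypothetical profile name:

  minikube stop -p sched-demo --schedule 5m        # schedule a stop five minutes out
  minikube stop -p sched-demo --schedule 15s       # re-scheduling replaces the earlier timer
  minikube stop -p sched-demo --cancel-scheduled   # cancel all pending scheduled stops
  minikube status -p sched-demo                    # exit status 7 once a scheduled stop has fired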

                                                
                                    
TestInsufficientStorage (8.7s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-422194 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-422194 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.252501719s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dfe0910c-4a13-4b5c-83c1-54eeac16b8dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-422194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3abe9054-d386-4fc2-9d0f-d0f127189534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22179"}}
	{"specversion":"1.0","id":"9ceb1b5e-37f0-4e98-8aab-4fb6652df067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9f7a63c-6e1e-4c67-8cd5-af580187d202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig"}}
	{"specversion":"1.0","id":"8b682095-68e4-4dd3-bf67-cdbee7ae1c06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube"}}
	{"specversion":"1.0","id":"628527a2-c163-4488-bd87-81ed98cf8043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c7c36483-d6d1-493d-833b-ae4181b4ca4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46c3eb44-ed4f-4665-b189-daacec4aa73f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5846c866-4343-48c0-acec-a7430753ae21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"66a40f94-eb28-46ac-bb42-88fda970a393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"372a1497-ebf7-46c8-af57-9c9d83b9c0b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fdc37388-7038-449b-8b66-8ddba5aea734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-422194\" primary control-plane node in \"insufficient-storage-422194\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5250300f-3536-45bc-adcd-a804eacf5bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766219634-22260 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"70b228cf-fe4e-4c6b-8256-1164adab7a7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d011695-386f-4792-81b4-3b49197ffed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-422194 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-422194 --output=json --layout=cluster: exit status 7 (280.042877ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-422194","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-422194","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1221 20:16:29.737968  190574 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-422194" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-422194 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-422194 --output=json --layout=cluster: exit status 7 (275.900221ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-422194","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-422194","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1221 20:16:30.015068  190687 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-422194" does not appear in /home/jenkins/minikube-integration/22179-9159/kubeconfig
	E1221 20:16:30.025057  190687 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/insufficient-storage-422194/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-422194" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-422194
E1221 20:16:31.074982   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-422194: (1.886248519s)
--- PASS: TestInsufficientStorage (8.70s)

                                                
                                    
TestRunningBinaryUpgrade (322.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3258610859 start -p running-upgrade-707221 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1221 20:18:48.256189   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3258610859 start -p running-upgrade-707221 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.911522589s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-707221 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-707221 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m57.160838664s)
helpers_test.go:176: Cleaning up "running-upgrade-707221" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-707221
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-707221: (2.971504004s)
--- PASS: TestRunningBinaryUpgrade (322.59s)

                                                
                                    
TestKubernetesUpgrade (135.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.754043527s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-291108
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-291108: (1.985345908s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-291108 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-291108 status --format={{.Host}}: exit status 7 (90.078456ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m44.970765627s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-291108 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.640005ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-291108] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-291108
	    minikube start -p kubernetes-upgrade-291108 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2911082 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-291108 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-291108 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.745063439s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-291108" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-291108
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-291108: (2.440770454s)
--- PASS: TestKubernetesUpgrade (135.12s)
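
TestKubernetesUpgrade walks the supported upgrade path: create the cluster at an old Kubernetes version, stop it, then start it again at the newer version; an in-place downgrade is refused with exit status 106, and the suggested recovery is to delete and recreate. A minimal sketch with the versions used above and a hypothetical profile name:

  minikube start -p upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  minikube stop -p upgrade-demo
  minikube start -p upgrade-demo --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio
  # downgrading the same profile is rejected; recreate instead:
  #   minikube delete -p upgrade-demo
  #   minikube start -p upgrade-demo --kubernetes-version=v1.28.0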

                                                
                                    
TestMissingContainerUpgrade (89.88s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3747954650 start -p missing-upgrade-621797 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3747954650 start -p missing-upgrade-621797 --memory=3072 --driver=docker  --container-runtime=crio: (41.886215279s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-621797
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-621797: (10.516060506s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-621797
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-621797 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-621797 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.389951486s)
helpers_test.go:176: Cleaning up "missing-upgrade-621797" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-621797
E1221 20:18:00.984722   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-621797: (2.319934687s)
--- PASS: TestMissingContainerUpgrade (89.88s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (79.801426ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-648134] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
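The MK_USAGE failure above is the expected behaviour: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing and the accepted invocations, with the profile name taken from this run:

    # rejected with exit status 14 (MK_USAGE)
    out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # accepted: omit the version flag when running without Kubernetes
    out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio
    # if a version is pinned in the global config, clear it first (per the error message above)
    out/minikube-linux-amd64 config unset kubernetes-version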

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-648134 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-648134 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.627820632s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-648134 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (304.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.131684104 start -p stopped-upgrade-611850 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.131684104 start -p stopped-upgrade-611850 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.821768025s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.131684104 -p stopped-upgrade-611850 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.131684104 -p stopped-upgrade-611850 stop: (2.117843605s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-611850 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-611850 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m19.338044595s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.28s)
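The binary-upgrade flow above is: provision with the previous minikube release, stop the cluster, then restart the same profile with the binary under test. A condensed sketch of those three steps, using the temporary copy of the v1.35.0 release that the test downloaded:

    # provision and stop with the older release binary
    /tmp/minikube-v1.35.0.131684104 start -p stopped-upgrade-611850 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.131684104 -p stopped-upgrade-611850 stop
    # the binary under test must adopt and upgrade the stopped cluster
    out/minikube-linux-amd64 start -p stopped-upgrade-611850 --memory=3072 --driver=docker --container-runtime=crio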

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.207684097s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-648134 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-648134 status -o json: exit status 2 (349.303683ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-648134","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-648134
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-648134: (2.059394925s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.62s)
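The non-zero exit from status here is expected for this state: the host container keeps running while the kubelet and API server stay Stopped, which is exactly what --no-kubernetes leaves behind. A sketch of the same check against this profile:

    out/minikube-linux-amd64 -p NoKubernetes-648134 status -o json
    echo $?   # 2 in the run above, with Host "Running" and Kubelet/APIServer "Stopped"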

                                                
                                    
TestNoKubernetes/serial/Start (8.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-648134 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.163475907s)
--- PASS: TestNoKubernetes/serial/Start (8.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22179-9159/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-648134 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-648134 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.869778ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (19.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E1221 20:17:54.123064   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (18.652958283s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (19.37s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-648134
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-648134: (1.352810797s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-648134 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-648134 --driver=docker  --container-runtime=crio: (6.592001459s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-648134 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-648134 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.397711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-149976 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-149976 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.483726ms)

                                                
                                                
-- stdout --
	* [false-149976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22179
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 20:18:21.626167  221230 out.go:360] Setting OutFile to fd 1 ...
	I1221 20:18:21.626449  221230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:18:21.626460  221230 out.go:374] Setting ErrFile to fd 2...
	I1221 20:18:21.626464  221230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1221 20:18:21.626649  221230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-9159/.minikube/bin
	I1221 20:18:21.627195  221230 out.go:368] Setting JSON to false
	I1221 20:18:21.628741  221230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3651,"bootTime":1766344651,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 20:18:21.628813  221230 start.go:143] virtualization: kvm guest
	I1221 20:18:21.630406  221230 out.go:179] * [false-149976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1221 20:18:21.631897  221230 out.go:179]   - MINIKUBE_LOCATION=22179
	I1221 20:18:21.631914  221230 notify.go:221] Checking for updates...
	I1221 20:18:21.634479  221230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 20:18:21.635688  221230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22179-9159/kubeconfig
	I1221 20:18:21.636804  221230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-9159/.minikube
	I1221 20:18:21.637941  221230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 20:18:21.639077  221230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 20:18:21.641520  221230 config.go:182] Loaded profile config "force-systemd-flag-301440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1221 20:18:21.641655  221230 config.go:182] Loaded profile config "kubernetes-upgrade-291108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1221 20:18:21.641790  221230 config.go:182] Loaded profile config "stopped-upgrade-611850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1221 20:18:21.641920  221230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1221 20:18:21.670784  221230 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1221 20:18:21.670889  221230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 20:18:21.724619  221230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-21 20:18:21.714700552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 20:18:21.724721  221230 docker.go:319] overlay module found
	I1221 20:18:21.726149  221230 out.go:179] * Using the docker driver based on user configuration
	I1221 20:18:21.727351  221230 start.go:309] selected driver: docker
	I1221 20:18:21.727366  221230 start.go:928] validating driver "docker" against <nil>
	I1221 20:18:21.727376  221230 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 20:18:21.728914  221230 out.go:203] 
	W1221 20:18:21.729950  221230 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1221 20:18:21.730897  221230 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-149976 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:18:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:17:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-611850
contexts:
- context:
    cluster: kubernetes-upgrade-291108
    user: kubernetes-upgrade-291108
  name: kubernetes-upgrade-291108
- context:
    cluster: stopped-upgrade-611850
    user: stopped-upgrade-611850
  name: stopped-upgrade-611850
current-context: kubernetes-upgrade-291108
kind: Config
users:
- name: kubernetes-upgrade-291108
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.key
- name: stopped-upgrade-611850
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-149976

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-149976"

                                                
                                                
----------------------- debugLogs end: false-149976 [took: 3.365621964s] --------------------------------
helpers_test.go:176: Cleaning up "false-149976" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-149976
--- PASS: TestNetworkPlugins/group/false (3.71s)
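The MK_USAGE exit here is the guard being tested: with the crio runtime a CNI is mandatory, so --cni=false is rejected before any cluster is created. A sketch of the contrast, assuming an explicit CNI such as kindnet (used elsewhere in this run) is the intended alternative:

    # rejected with exit status 14: the "crio" container runtime requires CNI
    out/minikube-linux-amd64 start -p false-149976 --memory=3072 --cni=false --driver=docker --container-runtime=crio
    # accepted: name a CNI explicitly instead
    out/minikube-linux-amd64 start -p false-149976 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio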

                                                
                                    
TestPreload/Start-NoPreload-PullImage (60.13s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-115092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-115092 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (50.083511485s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-115092 image pull public.ecr.aws/docker/library/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-115092 image pull public.ecr.aws/docker/library/busybox:latest: (1.379078867s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-115092
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-115092: (8.668882024s)
--- PASS: TestPreload/Start-NoPreload-PullImage (60.13s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (49.9s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-115092 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-115092 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.659277272s)
preload_test.go:77: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-115092 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (49.90s)
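Read together, the two preload tests above exercise image persistence across a preload-enabled restart: start without preload, pull a user image, stop, restart with preload, and confirm the image is still listed. A condensed sketch of that sequence with the profile from this run:

    out/minikube-linux-amd64 start -p test-preload-115092 --memory=3072 --preload=false --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-115092 image pull public.ecr.aws/docker/library/busybox:latest
    out/minikube-linux-amd64 stop -p test-preload-115092
    out/minikube-linux-amd64 start -p test-preload-115092 --preload=true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-115092 image list   # busybox should still be present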

                                                
                                    
TestPause/serial/Start (41.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-592353 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1221 20:21:31.075430   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-592353 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.892344309s)
--- PASS: TestPause/serial/Start (41.89s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-611850
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-611850: (1.141609221s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.726577216s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.73s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.72s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-592353 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-592353 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.709038056s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.72s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.870891814s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.87s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-149976 "pgrep -a kubelet"
I1221 20:22:24.461673   12711 config.go:182] Loaded profile config "auto-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-j8kcf" [49bc1a0b-7b84-45ef-ace3-bf2652510ae2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-j8kcf" [49bc1a0b-7b84-45ef-ace3-bf2652510ae2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003990898s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
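The DNS, Localhost and HairPin checks for the auto profile reduce to three in-pod probes against the netcat deployment; for manual replay against the same context they are:

    kubectl --context auto-149976 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"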

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-vrmrk" [3c51cc74-a77e-4f09-bbbc-20f6394e165c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003498216s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (46.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (46.075479975s)
--- PASS: TestNetworkPlugins/group/calico/Start (46.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-149976 "pgrep -a kubelet"
I1221 20:22:53.633923   12711 config.go:182] Loaded profile config "kindnet-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zjdc8" [dbeb5550-55f1-4ae3-8396-112b3ac973fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zjdc8" [dbeb5550-55f1-4ae3-8396-112b3ac973fd] Running
E1221 20:23:00.984391   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003538587s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.640914464s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (63.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m3.621429227s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.62s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-kv2wz" [728d2df8-a149-47d4-967a-14d51d28fb4b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-kv2wz" [728d2df8-a149-47d4-967a-14d51d28fb4b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006120481s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-149976 "pgrep -a kubelet"
I1221 20:23:44.016655   12711 config.go:182] Loaded profile config "calico-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-b684t" [d2cba1b9-be25-4bda-9628-fcbb89f4c10c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-b684t" [d2cba1b9-be25-4bda-9628-fcbb89f4c10c] Running
E1221 20:23:48.255884   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-675499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003897074s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (43.098903111s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.10s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-149976 "pgrep -a kubelet"
I1221 20:24:10.924441   12711 config.go:182] Loaded profile config "custom-flannel-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lhhwc" [83758edc-a5d4-43d4-b230-6d6010efbe2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lhhwc" [83758edc-a5d4-43d4-b230-6d6010efbe2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003507281s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-149976 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.390877412s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-149976 "pgrep -a kubelet"
I1221 20:24:28.597658   12711 config.go:182] Loaded profile config "enable-default-cni-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qlmvv" [3e8db34b-9347-4710-adb4-b1db5d70a190] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qlmvv" [3e8db34b-9347-4710-adb4-b1db5d70a190] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004367571s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-k95n4" [45529fc3-260c-4944-ac80-5d5adc5f139a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00363292s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (52.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.519519852s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.52s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-149976 "pgrep -a kubelet"
I1221 20:24:42.843892   12711 config.go:182] Loaded profile config "flannel-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hgrvv" [c5fd8b6c-d7f9-4df9-9a49-39459e0464d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hgrvv" [c5fd8b6c-d7f9-4df9-9a49-39459e0464d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003809421s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (46.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (46.119892839s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (43.978203346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.98s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-149976 "pgrep -a kubelet"
I1221 20:25:22.979990   12711 config.go:182] Loaded profile config "bridge-149976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-149976 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wbc4c" [88ea5244-cfe7-4eea-b095-0a4d4400bf88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wbc4c" [88ea5244-cfe7-4eea-b095-0a4d4400bf88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004533392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-149976 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-149976 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-699289 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8c49f147-ca7a-4fd1-8d64-3e54460c48f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8c49f147-ca7a-4fd1-8d64-3e54460c48f2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.004473905s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-699289 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-699289 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-699289 --alsologtostderr -v=3: (15.975357508s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-328404 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [abf67b09-143c-43b8-862d-b90cd54af971] Pending
helpers_test.go:353: "busybox" [abf67b09-143c-43b8-862d-b90cd54af971] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [abf67b09-143c-43b8-862d-b90cd54af971] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003584384s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-328404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (42.34424762s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-328404 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-328404 --alsologtostderr -v=3: (18.197946025s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289: exit status 7 (77.626581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-699289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-699289 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.024113237s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-699289 -n old-k8s-version-699289
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-413073 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f] Pending
helpers_test.go:353: "busybox" [c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c2722ae7-f2fd-49a5-9cff-6e02e1ffca0f] Running
E1221 20:26:04.033967   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/addons-734405/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.007793998s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-413073 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-413073 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-413073 --alsologtostderr -v=3: (16.687672592s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404: exit status 7 (77.409994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-328404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328404 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (50.993349573s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328404 -n no-preload-328404
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073: exit status 7 (90.40728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-413073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1221 20:26:31.074845   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/functional-463261/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-413073 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (46.853578719s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-413073 -n embed-certs-413073
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ea115a67-2180-409c-8faf-3057c284c92d] Pending
helpers_test.go:353: "busybox" [ea115a67-2180-409c-8faf-3057c284c92d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ea115a67-2180-409c-8faf-3057c284c92d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003754373s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-72bcm" [432bee5e-70b1-42af-8b1c-e6f832fcc048] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003915533s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-766361 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-766361 --alsologtostderr -v=3: (16.869030139s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-72bcm" [432bee5e-70b1-42af-8b1c-e6f832fcc048] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003607516s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-699289 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-699289 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361: exit status 7 (82.402117ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-766361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-766361 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (45.409143362s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766361 -n default-k8s-diff-port-766361
E1221 20:27:47.316828   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.322081   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.332347   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.352652   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.392940   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (24.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (24.232191517s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-gndgj" [ef89787a-0e59-40e7-9711-a36dd1482c60] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003212808s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-gndgj" [ef89787a-0e59-40e7-9711-a36dd1482c60] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003308616s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-328404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mxshr" [e563eb34-17f1-43ff-a23f-ceb8b1fc5706] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003591407s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328404 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mxshr" [e563eb34-17f1-43ff-a23f-ceb8b1fc5706] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002956238s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-413073 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-413073 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.4s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1221 20:27:29.763112   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-162834 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.206359835s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-162834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-162834
--- PASS: TestPreload/PreloadSrc/gcs (4.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-734511 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-734511 --alsologtostderr -v=3: (8.636062157s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.64s)

                                                
                                    
TestPreload/PreloadSrc/github (5.66s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1221 20:27:34.883400   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-984988 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.44793638s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-984988" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-984988
--- PASS: TestPreload/PreloadSrc/github (5.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511: exit status 7 (84.849517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-734511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (10.232299569s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734511 -n newest-cni-734511
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.54s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.47s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-832404 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-832404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-832404
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.47s)
E1221 20:27:45.123968   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/auto-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-n9w5g" [a18d7a00-dbc1-44ab-936f-eb9fda84c23b] Running
E1221 20:27:47.473861   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.634310   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:47.954902   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:48.595955   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1221 20:27:49.876314   12711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kindnet-149976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004325627s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
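
The wait above is label-based; an equivalent manual check against the same cluster would look like the following sketch, built only from the context name, namespace, and label that appear in the log (these commands were not run as part of this report):

	# list the dashboard pods the test waited on, using the profile's kubeconfig context
	kubectl --context default-k8s-diff-port-766361 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard -o wide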

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-734511 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-n9w5g" [a18d7a00-dbc1-44ab-936f-eb9fda84c23b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003483911s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-766361 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766361 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    

Test skip (34/419)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
376 TestNetworkPlugins/group/kubenet 3.18
384 TestNetworkPlugins/group/cilium 3.9
390 TestStartStop/group/disable-driver-mounts 0.16
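
Most of the skips above are environmental (wrong container runtime, wrong OS, or a missing driver) rather than failures. If one needs to be exercised locally, it can be re-run on its own; a minimal sketch, assuming the standard minikube integration-test layout under test/integration and the out/minikube-linux-amd64 binary used by this job — the harness-specific flag below is an assumption, not something taken from this report:

	# re-run a single (normally skipped) test against the docker driver + crio runtime
	go test ./test/integration -run "TestNetworkPlugins/group/cilium" -timeout 30m \
	  -args --minikube-start-args="--driver=docker --container-runtime=crio"   # harness flag assumed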
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-149976 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:18:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:17:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-611850
contexts:
- context:
    cluster: kubernetes-upgrade-291108
    user: kubernetes-upgrade-291108
  name: kubernetes-upgrade-291108
- context:
    cluster: stopped-upgrade-611850
    user: stopped-upgrade-611850
  name: stopped-upgrade-611850
current-context: kubernetes-upgrade-291108
kind: Config
users:
- name: kubernetes-upgrade-291108
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.key
- name: stopped-upgrade-611850
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.key
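
The repeated "context was not found" / "does not exist" errors above follow directly from this kubeconfig: the kubenet-149976 profile was skipped before a cluster was ever started, so no matching context exists and only the two upgrade-test clusters are registered. A minimal sketch of reproducing the same lookups by hand (profile name taken from the log; these commands were not part of the run):

	kubectl config get-contexts                    # lists kubernetes-upgrade-291108 and stopped-upgrade-611850 only
	kubectl --context kubenet-149976 get pods -A   # fails: context "kubenet-149976" does not exist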

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-149976

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-149976"

                                                
                                                
----------------------- debugLogs end: kubenet-149976 [took: 3.019262109s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-149976" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-149976
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-149976 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-149976" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:18:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-flag-301440
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:18:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22179-9159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:17:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-611850
contexts:
- context:
    cluster: force-systemd-flag-301440
    extensions:
    - extension:
        last-update: Sun, 21 Dec 2025 20:18:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-301440
  name: force-systemd-flag-301440
- context:
    cluster: kubernetes-upgrade-291108
    user: kubernetes-upgrade-291108
  name: kubernetes-upgrade-291108
- context:
    cluster: stopped-upgrade-611850
    user: stopped-upgrade-611850
  name: stopped-upgrade-611850
current-context: force-systemd-flag-301440
kind: Config
users:
- name: force-systemd-flag-301440
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/force-systemd-flag-301440/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/force-systemd-flag-301440/client.key
- name: kubernetes-upgrade-291108
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/kubernetes-upgrade-291108/client.key
- name: stopped-upgrade-611850
  user:
    client-certificate: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.crt
    client-key: /home/jenkins/minikube-integration/22179-9159/.minikube/profiles/stopped-upgrade-611850/client.key
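
The repeated "context was not found for specified context: cilium-149976" errors follow directly from this kubeconfig: only force-systemd-flag-301440, kubernetes-upgrade-291108 and stopped-upgrade-611850 have contexts, because the cilium profile was skipped before "minikube start" ever ran. A minimal Go sketch of that context lookup, assuming client-go's clientcmd package and a KUBECONFIG path taken from the environment (both illustrative, not the kubectl source):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: fall back to the default ~/.kube/config when KUBECONFIG is unset.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The debug collector asks for the "cilium-149976" context, which the
	// kubeconfig above never gained because the cluster was never started.
	if _, ok := cfg.Contexts["cilium-149976"]; !ok {
		fmt.Println(`context "cilium-149976" does not exist; available contexts:`)
		for name := range cfg.Contexts {
			fmt.Println(" -", name)
		}
	}
}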

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-149976

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-149976" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149976"

                                                
                                                
----------------------- debugLogs end: cilium-149976 [took: 3.74203667s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-149976" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-149976
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)
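
The cleanup step above shells out to the minikube binary, the same pattern as every "(dbg) Run:" line in this report. A minimal sketch of how such a step can be driven from a Go test (the helper name is hypothetical, not the actual helpers_test.go code; the binary path is the one shown in the log):

package helpers

import (
	"os/exec"
	"testing"
)

// runMinikube is a hypothetical stand-in for the "(dbg) Run:" helper seen in
// the log: it executes the minikube binary and fails the test on a non-zero exit.
func runMinikube(t *testing.T, args ...string) string {
	t.Helper()
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("%v failed: %v\n%s", cmd.Args, err, out)
	}
	return string(out)
}

// Example: the profile cleanup shown above would be
//   runMinikube(t, "delete", "-p", "cilium-149976")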

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-903813" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-903813
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
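
The skip above comes from a driver guard in start_stop_delete_test.go; a minimal sketch of that kind of check (the helper name and the way the driver is passed in are illustrative, not the actual minikube test code):

package startstop

import "testing"

// skipUnlessVirtualBox is a hypothetical helper showing the guard behind
// "skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox".
func skipUnlessVirtualBox(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}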

                                                
                                    